Android JNI

If you maintain a cross-platform SDK, you need to know how C++ and Java interact. As a C++ engineer, I used to use JNI mechanically to get work done — calling Java from C++ or C++ from Java — without really understanding why it works. I was full of questions, for example:

  1. Why must native interfaces be named and exported as Java_packagename_classname_functionname?
  2. Why is JNI_OnLoad called when a dynamic library is loaded?
  3. What happens when we call AttachCurrentThread or DetachCurrentThread?
  4. What are local references and global references? How do they work?
  5. What are JavaVM and JNIEnv?
  6. What is the relationship between the JVM and JNI?

This article answers these questions and gives you a clearer understanding of JNI and the JVM. All quoted Android source code is from Android 13.

Android JVM evolution history

When we talk about the JVM (Java virtual machine), GC (garbage collection) is usually mentioned first — GC frees software engineers from manually allocating and releasing heap memory, so we love it. But the original motivation for the Java language was cross-platform development. Different CPUs have different instruction sets, and building a C++ compiler back end for each of them is a big job. Java solves this by compiling source code to bytecode; each platform only needs an interpreter that translates the bytecode into its own instructions and executes them. Because a Java interpreter is much lighter to port than a C++ compiler, more and more architectures came to support the Java language.

A Java interpreter's performance is lower than that of C++-compiled native code, though. Java can easily reach niche platforms, but to gain a foothold on major platforms such as Android, optimization is needed.

The JVM evolution history of Android is a history of constantly adjusting how bytecode is compiled to native code. On the one hand, the JVM plays the role of a compiler, turning hot code into native code to improve performance; on the other hand, it must find a balance between performance, latency, storage, and power consumption.

Android version | Modifications | Advantages | Disadvantages | Format
Android 2.2 | Dalvik with JIT (Just-In-Time) | Hot code is compiled while the app runs. | No cache: every startup recompiles, high power consumption. | .odex
Android 4.4 | ART with AOT (Ahead-Of-Time) | Compiled once, at install time. | Installs take longer and use more storage. | .oat
Android 7.0 | ART with JIT + profile-guided AOT | Hot code is compiled while the device is idle or charging. | Performs poorly on first run. | .oat

Dalvik and ART are both Android virtual machines. Dalvik interprets .dex files, with JIT compiling hot code to native code to improve performance. ART (Android Runtime) executes native code (.oat files) compiled from the .dex files when the app is installed.

A dex file is compiled from Java sources and is an intermediate bytecode format. An Android app can contain more than one dex file; these dex files are loaded and interpreted as needed. For details, see:
https://source.android.com/docs/core/runtime/dex-format

Loading classes from dex files is inefficient. An oat file is an ELF file that contains all the dex files plus compiled native code. ART executes the native code first; for code that has not been compiled to native code, ART falls back to interpreting the dex bytecode.

After Android 7.0, ART changed the compilation strategy: instead of pure Ahead-Of-Time compilation (compiling everything at install time), a hybrid JIT+AOT approach is more reasonable. After installation, ART interprets the dex bytecode on the first run, then traces hot code and compiles it to native code asynchronously via JIT; on subsequent runs ART executes the compiled native code. Details: https://source.android.com/docs/core/runtime/jit-compiler

Android 8.0 added the vdex and art file types. The vdex file is split out of the oat file and contains the uncompressed dex files plus metadata; it avoids re-unpacking and re-verifying dex files when the Android system is updated. The art file is a memory image that caches hot objects such as ArtField/ArtMethod/DexCache/ClassTable; ART maps the art file into structured memory at app startup, which speeds up class loading from dex and oat files.

File format | Description
.dex | Java bytecode
.odex/.oat | optimized dex, ELF format
.vdex | verified dex, contains the raw dex files and quicken info
.art | image file caching hot objects such as strings/methods/types

Android Zygote

When is the JVM loaded? Does every Android app need to load its own JVM? Android plays a trick: it designates one process to load the resources that are vital to every Android app — the Java framework, the JVM, and JNI functions. This process is Zygote, and almost every user process in Android is forked from it. Since Zygote has already done the initialization, forked processes do not need to repeat it. The image below describes the Android system's startup:

The init process is the first user-space process started by the kernel and is the parent of all user processes. Init starts the Zygote process while parsing the init.rc files, and Zygote in turn starts system servers such as audioserver, cameraserver, media, and netd. Details below:

\system\core\rootdir\init.zygote64.rc

service zygote /system/bin/app_process64 -Xzygote /system/bin --zygote --start-system-server --socket-name=zygote
class main
priority -20
user root
group root readproc reserved_disk
socket zygote stream 660 root system
socket usap_pool_primary stream 660 root system
onrestart exec_background - system system -- /system/bin/vdc volume abort_fuse
onrestart write /sys/power/state on
onrestart restart audioserver
onrestart restart cameraserver
onrestart restart media
onrestart restart media.tuner
onrestart restart netd
onrestart restart wificond
task_profiles ProcessCapacityHigh MaxPerformance
critical window=${zygote.critical_window.minute:-off} target=zygote-fatal

These servers help Zygote and its forked processes access I/O devices; this stripped-out, modular design keeps Zygote's responsibilities simple.

In addition, Zygote shares RAM pages across all the app processes it forks, reducing memory usage. Static data is mmapped into a process; this not only allows the same data to be shared between processes but also allows it to be paged out when needed — e.g. Dalvik code (pre-linked .odex files), app resources, and native code in .so files.

When an app process is forked from Zygote, it still has to start its own activities and load its own resources (e.g. dex and .so files). Zygote and the JVM are C++ code, so these resources ultimately live in structured memory as C++ objects: shared libraries are loaded into SharedLibrary objects stored in an array, Java classes into mirror::Class objects stored in a ClassTable, and a class's methods/fields into ArtMethod/ArtField objects stored in the mirror::Class.

ART JVM source code path:

\art\runtime\

Registering Java native functions

JNI (Java Native Interface) is the interface that lets Java interact with native code (typically compiled from C or C++).

There are two ways for Java to reach C++ code:

  • Static registration: dynamic libraries expose native interfaces following a fixed naming rule. ART searches for these exported symbols and writes their addresses into the corresponding ArtMethod objects via ClassLinker::RegisterNative when the Java class is loaded into memory.
  • Dynamic registration: the dynamic library registers its native interfaces actively by calling JNIEnv::RegisterNatives inside JNI_OnLoad, which is invoked when the library is loaded; RegisterNatives writes the native interfaces' addresses into the ArtMethod objects.

Java example code:

package com.example.android.simplejni;

import android.app.Activity;
import android.os.Bundle;
import android.widget.TextView;

public class SimpleJNI extends Activity {
    /** Called when the activity is first created. */
    @Override
    public void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        TextView tv = new TextView(this);
        int sum = Native.add(2, 3);
        tv.setText("2 + 3 = " + Integer.toString(sum));
        setContentView(tv);
    }
}

class Native {
    static {
        // The runtime will add "lib" on the front and ".so" on the end of
        // the name supplied to loadLibrary.
        System.loadLibrary("simplejni");
    }

    static native int add(int a, int b);
}

Statically register C++ code:

//com_example_android_simplejni_Native.h
#ifndef _Included_com_example_android_simplejni_Native
#define _Included_com_example_android_simplejni_Native
#ifdef __ANDROID__
#include <jni.h>

#ifdef __cplusplus
extern "C" {
#endif
/* The exported symbol must follow the JNI naming rule:
 * Java_<package>_<class>_<method>, with '.' replaced by '_'. */
JNIEXPORT jint JNICALL Java_com_example_android_simplejni_Native_add
  (JNIEnv *, jclass, jint, jint);

#ifdef __cplusplus
}
#endif
#endif
#endif

//com_example_android_simplejni_Native.cpp
#include "com_example_android_simplejni_Native.h"
#ifdef __ANDROID__
JNIEXPORT jint JNICALL Java_com_example_android_simplejni_Native_add
  (JNIEnv *env, jclass clazz, jint a, jint b) {
    int result = (int)a + (int)b;
    return (jint)result;
}
#endif
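The symbol name ART looks up during static registration can be derived mechanically from the class and method name. The helper below is my own illustration, not a platform API; the real JNI mangling rules additionally escape '_' as "_1" and encode non-ASCII characters, which this sketch ignores since plain ASCII names without underscores are the common case.

```java
// Hypothetical helper, for illustration only: derive the exported symbol
// name that ART searches for during static registration.
public class JniName {
    static String shortName(String className, String methodName) {
        // '.' in the fully qualified class name becomes '_' in the symbol.
        return "Java_" + className.replace('.', '_') + "_" + methodName;
    }

    public static void main(String[] args) {
        System.out.println(
                shortName("com.example.android.simplejni.Native", "add"));
        // prints "Java_com_example_android_simplejni_Native_add"
    }
}
```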

Dynamically register C++ code:

#define LOG_TAG "simplejni native.cpp"
#include <stdio.h>
#include "jni.h"

static jint
add(JNIEnv* /*env*/, jobject /*thiz*/, jint a, jint b) {
    int result = a + b;
    return result;
}

static const char *classPathName = "com/example/android/simplejni/Native";

static JNINativeMethod methods[] = {
    {"add", "(II)I", (void*)add },
};

/*
 * Register several native methods for one class.
 */
static int registerNativeMethods(JNIEnv* env, const char* className,
                                 JNINativeMethod* gMethods, int numMethods)
{
    jclass clazz;

    clazz = env->FindClass(className);
    if (clazz == NULL) {
        return JNI_FALSE;
    }
    if (env->RegisterNatives(clazz, gMethods, numMethods) < 0) {
        return JNI_FALSE;
    }
    return JNI_TRUE;
}


static int registerNatives(JNIEnv* env)
{
    if (!registerNativeMethods(env, classPathName,
            methods, sizeof(methods) / sizeof(methods[0]))) {
        return JNI_FALSE;
    }

    return JNI_TRUE;
}

typedef union {
    JNIEnv* env;
    void* venv;
} UnionJNIEnvToVoid;

JNIEXPORT jint JNI_OnLoad(JavaVM* vm, void* /*reserved*/)
{
    UnionJNIEnvToVoid uenv;
    uenv.venv = NULL;
    jint result = -1;
    JNIEnv* env = NULL;

    if (vm->GetEnv(&uenv.venv, JNI_VERSION_1_4) != JNI_OK) {
        goto bail;
    }
    env = uenv.env;

    if (registerNatives(env) != JNI_TRUE) {
        goto bail;
    }

    result = JNI_VERSION_1_4;

bail:
    return result;
}
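The "(II)I" string in the JNINativeMethod table is a JNI type descriptor: parameter types between parentheses, return type after them, with I for int, J for long, V for void, and Lpkg/Class; for objects. The same descriptors appear in class files, so they can be derived with reflection; the sketch below is my own helper, not a JDK API, and Math.addExact is used only as a convenient (int, int) -> int stand-in for the article's Native.add:

```java
import java.lang.reflect.Method;
import java.util.HashMap;
import java.util.Map;

// Illustrative helper: build the JNI descriptor string (e.g. "(II)I")
// for a reflected method, the same string RegisterNatives expects.
public class JniDescriptor {
    private static final Map<Class<?>, String> PRIMS = new HashMap<>();
    static {
        PRIMS.put(int.class, "I");   PRIMS.put(boolean.class, "Z");
        PRIMS.put(byte.class, "B");  PRIMS.put(char.class, "C");
        PRIMS.put(short.class, "S"); PRIMS.put(long.class, "J");
        PRIMS.put(float.class, "F"); PRIMS.put(double.class, "D");
        PRIMS.put(void.class, "V");
    }

    static String typeOf(Class<?> c) {
        if (PRIMS.containsKey(c)) return PRIMS.get(c);
        if (c.isArray()) return "[" + typeOf(c.getComponentType());
        return "L" + c.getName().replace('.', '/') + ";";
    }

    static String descriptorOf(Method m) {
        StringBuilder sb = new StringBuilder("(");
        for (Class<?> p : m.getParameterTypes()) sb.append(typeOf(p));
        return sb.append(")").append(typeOf(m.getReturnType())).toString();
    }

    public static void main(String[] args) throws Exception {
        // "static native int add(int, int)" has the same shape as
        // Math.addExact(int, int), so both map to "(II)I".
        Method m = Math.class.getMethod("addExact", int.class, int.class);
        System.out.println(descriptorOf(m)); // prints "(II)I"
    }
}
```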

Loading dynamic library

There are two ways to load a dynamic library:

  • System.load(“/storage/emulated/0/libnative-lib.so”)
  • System.loadLibrary(“native-lib”);
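When neither path yields the library, System.loadLibrary throws UnsatisfiedLinkError, and its message usually lists the paths that were searched. A minimal sketch, runnable on any desktop JVM ("no_such_lib" is a placeholder name, not a real library):

```java
// Demonstrates the failure mode of System.loadLibrary: the runtime maps
// "no_such_lib" to a platform file name (libno_such_lib.so on Linux),
// searches its library paths, and throws when nothing matches.
public class LoadDemo {
    public static void main(String[] args) {
        try {
            System.loadLibrary("no_such_lib");
            System.out.println("loaded");
        } catch (UnsatisfiedLinkError e) {
            // The message typically includes java.library.path entries.
            System.out.println("not found: " + e.getMessage());
        }
    }
}
```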

Usually we call the System.loadLibrary API to load a dynamic library. So where is the library located? Which directories are searched?

Let’s trace the source code of Android:

/libcore/ojluni/src/main/java/java/lang/System.java

public static void loadLibrary(String libname) {
    Runtime.getRuntime().loadLibrary0(Reflection.getCallerClass(), libname);
}

/libcore/ojluni/src/main/java/java/lang/Runtime.java

void loadLibrary0(Class<?> fromClass, String libname) {
ClassLoader classLoader = ClassLoader.getClassLoader(fromClass);
loadLibrary0(classLoader, fromClass, libname);
}

private synchronized void loadLibrary0(ClassLoader loader, Class<?> callerClass, String libname) {
//
// 'libname', the parameter of System.loadLibrary, should be the middle part of
// your library's name: if your lib is libTest.so, pass "Test". If you pass
// "libTest.so", the code will end up searching for "liblibTest.so.so".
//
if (libname.indexOf((int)File.separatorChar) != -1) {
throw new UnsatisfiedLinkError(
"Directory separator should not appear in library name: " + libname);
}


String libraryName = libname;
if (loader != null && !(loader instanceof BootClassLoader)) {
String filename = loader.findLibrary(libraryName);
if (filename == null &&
(loader.getClass() == PathClassLoader.class ||
loader.getClass() == DelegateLastClassLoader.class)) {
//
// Don't give up even if we failed to find the library in the native lib paths.
// The underlying dynamic linker might be able to find the lib in one of the linker
// namespaces associated with the current linker namespace.
// System.mapLibraryName("Test") returns "libTest.so"; it does nothing but build the full library name.
//
filename = System.mapLibraryName(libraryName);
}
if (filename == null) {
throw new UnsatisfiedLinkError(loader + " couldn't find \"" +
System.mapLibraryName(libraryName) + "\"");
}

String error = nativeLoad(filename, loader);
if (error != null) {
throw new UnsatisfiedLinkError(error);
}
return;
}

//
// nativeLoad is actually a native function, which does two things:
// 1) Load library to VM.
// 2) Call JNI_OnLoad function in your library.
//
getLibPaths();
String filename = System.mapLibraryName(libraryName);
String error = nativeLoad(filename, loader, callerClass);
if (error != null) {
throw new UnsatisfiedLinkError(error);
}
}

\libcore\ojluni\src\main\native\Runtime.c

Runtime_nativeLoad(JNIEnv* env, jclass ignored, jstring javaFilename,
                   jobject javaLoader, jclass caller)
{
    return JVM_NativeLoad(env, javaFilename, javaLoader, caller);
}

\art\openjdkjvm\OpenjdkJvm.cc

JNIEXPORT jstring JVM_NativeLoad(JNIEnv* env,
                                 jstring javaFilename,
                                 jobject javaLoader,
                                 jclass caller) {
    ScopedUtfChars filename(env, javaFilename);
    if (filename.c_str() == nullptr) {
        return nullptr;
    }

    std::string error_msg;
    {
        art::JavaVMExt* vm = art::Runtime::Current()->GetJavaVM();
        bool success = vm->LoadNativeLibrary(env,
                                             filename.c_str(),
                                             javaLoader,
                                             caller,
                                             &error_msg);
        if (success) {
            return nullptr;
        }
    }

    // Don't let a pending exception from JNI_OnLoad cause a CheckJNI issue with NewStringUTF.
    env->ExceptionClear();
    return env->NewStringUTF(error_msg.c_str());
}

\art\runtime\jni\java_vm_ext.cc

bool JavaVMExt::LoadNativeLibrary(JNIEnv* env,
const std::string& path,
jobject class_loader,
jclass caller_class,
std::string* error_msg) {
...
// See if we've already loaded this library. If we have, and the class loader
// matches, return successfully without doing anything.
// TODO: for better results we should canonicalize the pathname (or even compare
// inodes). This implementation is fine if everybody is using System.loadLibrary.
SharedLibrary* library;
Thread* self = Thread::Current();
....
// Create a new entry.
// TODO: move the locking (and more of this logic) into Libraries.
bool created_library = false;
{
// Create SharedLibrary ahead of taking the libraries lock to maintain lock ordering.
std::unique_ptr<SharedLibrary> new_library(
new SharedLibrary(env,
self,
path,
handle,
needs_native_bridge,
class_loader,
class_loader_allocator));

MutexLock mu(self, *Locks::jni_libraries_lock_);
library = libraries_->Get(path);
if (library == nullptr) { // We won race to get libraries_lock.
library = new_library.release();
libraries_->Put(path, library);
created_library = true;
}
}
if (!created_library) {
LOG(INFO) << "WOW: we lost a race to add shared library: "
<< "\"" << path << "\" ClassLoader=" << class_loader;
return library->CheckOnLoadResult();
}
VLOG(jni) << "[Added shared library \"" << path << "\" for ClassLoader " << class_loader << "]";

bool was_successful = false;
void* sym = library->FindSymbol("JNI_OnLoad", nullptr);
if (sym == nullptr) {
VLOG(jni) << "[No JNI_OnLoad found in \"" << path << "\"]";
was_successful = true;
} else {
// Call JNI_OnLoad. We have to override the current class
// loader, which will always be "null" since the stuff at the
// top of the stack is around Runtime.loadLibrary(). (See
// the comments in the JNI FindClass function.)
ScopedLocalRef<jobject> old_class_loader(env, env->NewLocalRef(self->GetClassLoaderOverride()));
self->SetClassLoaderOverride(class_loader);

VLOG(jni) << "[Calling JNI_OnLoad in \"" << path << "\"]";
using JNI_OnLoadFn = int(*)(JavaVM*, void*);
JNI_OnLoadFn jni_on_load = reinterpret_cast<JNI_OnLoadFn>(sym);
int version = (*jni_on_load)(this, nullptr);

....
}

library->SetResult(was_successful);
return was_successful;
}

We should pay attention to two places:

  1. The ClassLoader searches for the dynamic library:

    private synchronized void loadLibrary0(...) {
        ...
        String libraryName = libname;
        if (loader != null && !(loader instanceof BootClassLoader)) {
            ...
            String filename = loader.findLibrary(libraryName);
            ...
        }
        ...
    }
  2. JNI_OnLoad is called once the dynamic library has been loaded:

    bool JavaVMExt::LoadNativeLibrary(...) {
        void* sym = library->FindSymbol("JNI_OnLoad", nullptr);
        if (sym == nullptr) {
            ...
        } else {
            ...
            using JNI_OnLoadFn = int(*)(JavaVM*, void*);
            JNI_OnLoadFn jni_on_load = reinterpret_cast<JNI_OnLoadFn>(sym);
            int version = (*jni_on_load)(this, nullptr);
        }
    }

Next question: where does the ClassLoader search?
Let’s trace the source code:

libcore\dalvik\src\main\java\dalvik\system\DexPathList.java

// DexPathList#DexPathList construction function
DexPathList(ClassLoader definingContext, String dexPath,
String librarySearchPath, File optimizedDirectory, boolean isTrusted) {
...

// Native libraries may exist in both the system and
// application library paths, and we use this search order:
//
// 1. This class loader's library path for application libraries (librarySearchPath):
// 1.1. Native library directories
// 1.2. Path to libraries in apk-files
// 2. The VM's library path from the system property for system libraries
// also known as java.library.path
//
// This order was reversed prior to Gingerbread; see http://b/2933456.
this.nativeLibraryDirectories = splitPaths(librarySearchPath, false);
this.systemNativeLibraryDirectories =
splitPaths(System.getProperty("java.library.path"), true);
this.nativeLibraryPathElements = makePathElements(getAllNativeLibraryDirectories());

...
}

// DexPathList#addNativePath
public void addNativePath(Collection<String> libPaths) {
if (libPaths.isEmpty()) {
return;
}
List<File> libFiles = new ArrayList<>(libPaths.size());
for (String path : libPaths) {
libFiles.add(new File(path));
}
ArrayList<NativeLibraryElement> newPaths =
new ArrayList<>(nativeLibraryPathElements.length + libPaths.size());
newPaths.addAll(Arrays.asList(nativeLibraryPathElements));
for (NativeLibraryElement element : makePathElements(libFiles)) {
if (!newPaths.contains(element)) {
newPaths.add(element);
}
}
nativeLibraryPathElements = newPaths.toArray(new NativeLibraryElement[newPaths.size()]);
}

/frameworks/base/core/java/android/app/ApplicationLoaders.java

// ApplicationLoaders#addNative
void addNative(ClassLoader classLoader, Collection<String> libPaths) {
    if (!(classLoader instanceof PathClassLoader)) {
        throw new IllegalStateException("class loader is not a PathClassLoader");
    }
    final PathClassLoader baseDexClassLoader = (PathClassLoader) classLoader;
    baseDexClassLoader.addNativePath(libPaths);
}

/frameworks/base/core/java/android/app/LoadedApk.java

// LoadedApk#createOrUpdateClassLoaderLocked
private void createOrUpdateClassLoaderLocked(List<String> addedPaths) {
...
final List<String> libPaths = new ArrayList<>(10);
...
final String defaultSearchPaths = System.getProperty("java.library.path");
final boolean treatVendorApkAsUnbundled = !defaultSearchPaths.contains("/vendor/lib");
if (mApplicationInfo.getCodePath() != null
&& mApplicationInfo.isVendor() && treatVendorApkAsUnbundled) {
isBundledApp = false;
}

if (mApplicationInfo.getCodePath() != null
&& mApplicationInfo.isProduct()
&& VndkProperties.product_vndk_version().isPresent()) {
isBundledApp = false;
}

makePaths(mActivityThread, isBundledApp, mApplicationInfo, zipPaths, libPaths);
String libraryPermittedPath = canAccessDataDir() ? mDataDir : "";

if (isBundledApp) {
libraryPermittedPath += File.pathSeparator
+ Paths.get(getAppDir()).getParent().toString();

// This is necessary to grant bundled apps access to
// libraries located in subdirectories of /system/lib
libraryPermittedPath += File.pathSeparator + defaultSearchPaths;
}

final String librarySearchPath = TextUtils.join(File.pathSeparator, libPaths);
...
if (!libPaths.isEmpty()) {
// Temporarily disable logging of disk reads on the Looper thread as this is necessary
StrictMode.ThreadPolicy oldPolicy = allowThreadDiskReads();
try {
ApplicationLoaders.getDefault().addNative(mDefaultClassLoader, libPaths);
} finally {
setThreadPolicy(oldPolicy);
}
}
...
}

// LoadedApk#makePaths
public static void makePaths(ActivityThread activityThread,
boolean isBundledApp,
ApplicationInfo aInfo,
List<String> outZipPaths,
List<String> outLibPaths) {
final String appDir = aInfo.sourceDir;
final String libDir = aInfo.nativeLibraryDir;

outZipPaths.clear();
outZipPaths.add(appDir);

// Do not load all available splits if the app requested isolated split loading.
if (aInfo.splitSourceDirs != null && !aInfo.requestsIsolatedSplitLoading()) {
Collections.addAll(outZipPaths, aInfo.splitSourceDirs);
}

if (outLibPaths != null) {
outLibPaths.clear();
}

String[] instrumentationLibs = null;
if (activityThread != null) {
String instrumentationPackageName = activityThread.mInstrumentationPackageName;
String instrumentationAppDir = activityThread.mInstrumentationAppDir;
String[] instrumentationSplitAppDirs = activityThread.mInstrumentationSplitAppDirs;
String instrumentationLibDir = activityThread.mInstrumentationLibDir;

String instrumentedAppDir = activityThread.mInstrumentedAppDir;
String[] instrumentedSplitAppDirs = activityThread.mInstrumentedSplitAppDirs;
String instrumentedLibDir = activityThread.mInstrumentedLibDir;

if (appDir.equals(instrumentationAppDir)
|| appDir.equals(instrumentedAppDir)) {
outZipPaths.clear();
outZipPaths.add(instrumentationAppDir);
if (!instrumentationAppDir.equals(instrumentedAppDir)) {
outZipPaths.add(instrumentedAppDir);
}

// Only add splits if the app did not request isolated split loading.
if (!aInfo.requestsIsolatedSplitLoading()) {
if (instrumentationSplitAppDirs != null) {
Collections.addAll(outZipPaths, instrumentationSplitAppDirs);
}

if (!instrumentationAppDir.equals(instrumentedAppDir)) {
if (instrumentedSplitAppDirs != null) {
Collections.addAll(outZipPaths, instrumentedSplitAppDirs);
}
}
}

if (outLibPaths != null) {
outLibPaths.add(instrumentationLibDir);
if (!instrumentationLibDir.equals(instrumentedLibDir)) {
outLibPaths.add(instrumentedLibDir);
}
}

if (!instrumentedAppDir.equals(instrumentationAppDir)) {
instrumentationLibs = getLibrariesFor(instrumentationPackageName);
}
}
}

if (outLibPaths != null) {
if (outLibPaths.isEmpty()) {
outLibPaths.add(libDir);
}

// Add path to libraries in apk for current abi. Do this now because more entries
// will be added to zipPaths that shouldn't be part of the library path.
if (aInfo.primaryCpuAbi != null) {
// Add fake libs into the library search path if we target prior to N.
if (aInfo.targetSdkVersion < Build.VERSION_CODES.N) {
outLibPaths.add("/system/fake-libs" +
(VMRuntime.is64BitAbi(aInfo.primaryCpuAbi) ? "64" : ""));
}
for (String apk : outZipPaths) {
outLibPaths.add(apk + "!/lib/" + aInfo.primaryCpuAbi);
}
}

if (isBundledApp) {
// Add path to system libraries to libPaths;
// Access to system libs should be limited
// to bundled applications; this is why updated
// system apps are not included.
outLibPaths.add(System.getProperty("java.library.path"));
}
}

// Add the shared libraries native paths. The dex files in shared libraries will
// be resolved through shared library loaders, which are setup later.
Set<String> outSeenPaths = new LinkedHashSet<>();
appendSharedLibrariesLibPathsIfNeeded(
aInfo.sharedLibraryInfos, aInfo, outSeenPaths, outLibPaths);

// ApplicationInfo.sharedLibraryFiles is a public API, so anyone can change it.
// We prepend shared libraries that the package manager hasn't seen, maintaining their
// original order where possible.
if (aInfo.sharedLibraryFiles != null) {
int index = 0;
for (String lib : aInfo.sharedLibraryFiles) {
// sharedLibraryFiles might contain native shared libraries that are not APK paths.
if (!lib.endsWith(".apk")) {
continue;
}
if (!outSeenPaths.contains(lib) && !outZipPaths.contains(lib)) {
outZipPaths.add(index, lib);
index++;
appendApkLibPathIfNeeded(lib, aInfo, outLibPaths);
}
}
}

if (instrumentationLibs != null) {
for (String lib : instrumentationLibs) {
if (!outZipPaths.contains(lib)) {
outZipPaths.add(0, lib);
appendApkLibPathIfNeeded(lib, aInfo, outLibPaths);
}
}
}
}

// LoadedApk#appendApkLibPathIfNeeded
private static void appendApkLibPathIfNeeded(@NonNull String path,
@NonNull ApplicationInfo applicationInfo, @Nullable List<String> outLibPaths) {
// Looking at the suffix is a little hacky but a safe and simple solution.
// We will be revisiting code in the next release and clean this up.
if (outLibPaths != null && applicationInfo.primaryCpuAbi != null && path.endsWith(".apk")) {
if (applicationInfo.targetSdkVersion >= Build.VERSION_CODES.O) {
outLibPaths.add(path + "!/lib/" + applicationInfo.primaryCpuAbi);
}
}
}

ClassLoader.findLibrary searches the native library paths and returns the full path of your library.
Many candidate paths are generated when the app is loaded, for example:

  1. System.getProperty("java.library.path") returns the system default library paths, e.g. "/system/lib:/vendor/lib:/product/lib"
  2. apkPath + "!/lib/" + primaryCpuAbi, e.g. "/data/app/[package_name]!/lib/armeabi-v7a"
  3. SystemProperties.get("ro.dalvik.vm.isa." + secondaryCpuAbi), e.g. "/data/app/[package_name]/lib/arm"
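Whichever path wins, the file name searched for is always the one produced by System.mapLibraryName, which turns the short name passed to loadLibrary into the platform-specific file name. A quick sketch (the output below assumes a Linux or Android runtime; macOS would print "libsimplejni.dylib" and Windows "simplejni.dll"):

```java
// Shows how the short name given to loadLibrary maps to the file name
// the class loader actually searches its candidate paths for.
public class MapDemo {
    public static void main(String[] args) {
        // On Linux/Android this prints "libsimplejni.so".
        System.out.println(System.mapLibraryName("simplejni"));
    }
}
```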

Start Zygote and JVM

We already know how a dynamic library is loaded. If you look at the calling code, you will find that Java loads dynamic libraries by calling Runtime_nativeLoad in Runtime.c.
But Runtime_nativeLoad is itself a native interface — so who loads the dynamic library that contains Runtime_nativeLoad?

Let's first find out which library contains Runtime.c. Searching the .bp build files shows it is libopenjdk.so.

\libcore\ojluni\src\main\native\Android.bp

filegroup {
    name: "libopenjdk_native_srcs",
    ...
    srcs: [
        ...
        "Runtime.c",
        ...
    ],
}

\libcore\NativeCode.bp

cc_defaults {
    name: "libopenjdk_native_defaults",
    defaults: [
        "core_native_default_flags",
        "core_native_default_libs",
    ],
    srcs: [":libopenjdk_native_srcs"],
    ...
}
...
cc_library_shared {
    name: "libopenjdk",
    visibility: [
        "//art/build/apex",
    ],
    apex_available: [
        "com.android.art",
        "com.android.art.debug",
    ],
    defaults: ["libopenjdk_native_defaults"],
    shared_libs: [
        "libopenjdkjvm",
    ],
}

Next, we need to find where an app loads libopenjdk.so. Looking at the call stack that starts the JVM gives the answer.

1. Start Zygote

\system\core\init\main.cpp

// init process' code
int main(int argc, char** argv) {
    ...
    if (argc > 1) {
        if (!strcmp(argv[1], "subcontext")) {
            android::base::InitLogging(argv, &android::base::KernelLogger);
            const BuiltinFunctionMap& function_map = GetBuiltinFunctionMap();

            return SubcontextMain(argc, argv, &function_map);
        }

        if (!strcmp(argv[1], "selinux_setup")) {
            return SetupSelinux(argv);
        }

        if (!strcmp(argv[1], "second_stage")) {
            return SecondStageMain(argc, argv);
        }
    }
    ...
}

\system\core\init\init.cpp

int SecondStageMain(int argc, char** argv) {
    ...
    LoadBootScripts(am, sm);
    ...
}

static void LoadBootScripts(ActionManager& action_manager, ServiceList& service_list) {
    Parser parser = CreateParser(action_manager, service_list);

    std::string bootscript = GetProperty("ro.boot.init_rc", "");
    if (bootscript.empty()) {
        parser.ParseConfig("/system/etc/init/hw/init.rc");
        if (!parser.ParseConfig("/system/etc/init")) {
            late_import_paths.emplace_back("/system/etc/init");
        }
        // late_import is available only in Q and earlier release. As we don't
        // have system_ext in those versions, skip late_import for system_ext.
        parser.ParseConfig("/system_ext/etc/init");
        if (!parser.ParseConfig("/vendor/etc/init")) {
            late_import_paths.emplace_back("/vendor/etc/init");
        }
        if (!parser.ParseConfig("/odm/etc/init")) {
            late_import_paths.emplace_back("/odm/etc/init");
        }
        if (!parser.ParseConfig("/product/etc/init")) {
            late_import_paths.emplace_back("/product/etc/init");
        }
    } else {
        parser.ParseConfig(bootscript);
    }
}

\system\core\rootdir\init.rc

import /init.environ.rc
import /system/etc/init/hw/init.usb.rc
import /init.${ro.hardware}.rc
import /vendor/etc/init/hw/init.${ro.hardware}.rc
import /system/etc/init/hw/init.usb.configfs.rc
import /system/etc/init/hw/init.${ro.zygote}.rc
....

\system\core\rootdir\init.zygote64.rc

service zygote /system/bin/app_process64 -Xzygote /system/bin --zygote --start-system-server --socket-name=zygote
...

\frameworks\base\cmds\app_process\app_main.cpp

// Zygote process' code
int main(int argc, char* const argv[])
{
...
AppRuntime runtime(argv[0], computeArgBlockSize(argc, argv));
...
// Parse runtime arguments. Stop at first unrecognized option.
bool zygote = false;
bool startSystemServer = false;
bool application = false;
String8 niceName;
String8 className;

++i; // Skip unused "parent dir" argument.
while (i < argc) {
const char* arg = argv[i++];
if (strcmp(arg, "--zygote") == 0) {
zygote = true;
niceName = ZYGOTE_NICE_NAME;
} else if (strcmp(arg, "--start-system-server") == 0) {
startSystemServer = true;
} else if (strcmp(arg, "--application") == 0) {
application = true;
} else if (strncmp(arg, "--nice-name=", 12) == 0) {
niceName.setTo(arg + 12);
} else if (strncmp(arg, "--", 2) != 0) {
className.setTo(arg);
break;
} else {
--i;
break;
}
}

....

if (zygote) {
runtime.start("com.android.internal.os.ZygoteInit", args, zygote);
} else if (!className.isEmpty()) {
runtime.start("com.android.internal.os.RuntimeInit", args, zygote);
} else {
fprintf(stderr, "Error: no class name or --zygote supplied.\n");
app_usage();
LOG_ALWAYS_FATAL("app_process: no class name or --zygote supplied.");
}
}

class AppRuntime : public AndroidRuntime
{
public:
AppRuntime(char* argBlockStart, const size_t argBlockLength)
: AndroidRuntime(argBlockStart, argBlockLength)
, mClass(NULL)
{
}
...
}

\frameworks\base\core\jni\AndroidRuntime.cpp

void AndroidRuntime::start(const char* className, const Vector<String8>& options, bool zygote)
{
...

/* start the virtual machine */
JniInvocation jni_invocation;
jni_invocation.Init(NULL);
JNIEnv* env;
if (startVm(&mJavaVM, &env, zygote, primary_zygote) != 0) {
return;
}
onVmCreated(env);

/*
* Register android functions.
*/
if (startReg(env) < 0) {
ALOGE("Unable to register all android natives\n");
return;
}

...

/*
* Start VM. This thread becomes the main thread of the VM, and will
* not return until the VM exits.
*/
char* slashClassName = toSlashClassName(className != NULL ? className : "");
jclass startClass = env->FindClass(slashClassName);
if (startClass == NULL) {
ALOGE("JavaVM unable to locate class '%s'\n", slashClassName);
/* keep going */
} else {
jmethodID startMeth = env->GetStaticMethodID(startClass, "main",
"([Ljava/lang/String;)V");
if (startMeth == NULL) {
ALOGE("JavaVM unable to find main() in '%s'\n", className);
/* keep going */
} else {
env->CallStaticVoidMethod(startClass, startMeth, strArray);

#if 0
if (env->ExceptionCheck())
threadExitUncaughtException(env);
#endif
}
}
free(slashClassName);

ALOGD("Shutting down VM\n");
if (mJavaVM->DetachCurrentThread() != JNI_OK)
ALOGW("Warning: unable to detach main thread\n");
if (mJavaVM->DestroyJavaVM() != 0)
ALOGW("Warning: VM did not shut down cleanly\n");
}

2. Start JVM

\frameworks\base\core\jni\AndroidRuntime.cpp

int AndroidRuntime::startVm(JavaVM** pJavaVM, JNIEnv** pEnv, bool zygote, bool primary_zygote)
{
JavaVMInitArgs initArgs;
...
initArgs.version = JNI_VERSION_1_4;
initArgs.options = mOptions.editArray();
initArgs.nOptions = mOptions.size();
initArgs.ignoreUnrecognized = JNI_FALSE;

/*
* Initialize the VM.
*
* The JavaVM* is essentially per-process, and the JNIEnv* is per-thread.
* If this call succeeds, the VM is ready, and we can start issuing
* JNI calls.
*/
if (JNI_CreateJavaVM(pJavaVM, pEnv, &initArgs) < 0) {
ALOGE("JNI_CreateJavaVM failed\n");
return -1;
}

return 0;
}

\libnativehelper\JniInvocation.c

static const char* kDefaultJniInvocationLibrary = "libart.so";
...
struct JniInvocationImpl {
// Name of library providing JNI_ method implementations.
const char* jni_provider_library_name;

// Opaque pointer to shared library from dlopen / LoadLibrary.
void* jni_provider_library;

// Function pointers to methods in JNI provider.
jint (*JNI_GetDefaultJavaVMInitArgs)(void*);
jint (*JNI_CreateJavaVM)(JavaVM**, JNIEnv**, void*);
jint (*JNI_GetCreatedJavaVMs)(JavaVM**, jsize, jsize*);
};

static struct JniInvocationImpl g_impl;
...
jint JNI_CreateJavaVM(JavaVM** p_vm, JNIEnv** p_env, void* vm_args) {
ALOG_ALWAYS_FATAL_IF(NULL == g_impl.JNI_CreateJavaVM, "Runtime library not loaded.");
return g_impl.JNI_CreateJavaVM(p_vm, p_env, vm_args);
}
...
bool JniInvocationInit(struct JniInvocationImpl* instance, const char* library_name) {
...
library_name = kDefaultJniInvocationLibrary;
library = DlOpenLibrary(library_name);
if (library == NULL) {
ALOGE("Failed to dlopen %s: %s", library_name, DlGetError());
return false;
}
}

DlSymbol JNI_GetDefaultJavaVMInitArgs_ = FindSymbol(library, "JNI_GetDefaultJavaVMInitArgs");
if (JNI_GetDefaultJavaVMInitArgs_ == NULL) {
return false;
}

DlSymbol JNI_CreateJavaVM_ = FindSymbol(library, "JNI_CreateJavaVM");
if (JNI_CreateJavaVM_ == NULL) {
return false;
}
...
}

\art\runtime\jni\java_vm_ext.cc

extern "C" jint JNI_CreateJavaVM(JavaVM** p_vm, JNIEnv** p_env, void* vm_args) {
ScopedTrace trace(__FUNCTION__);
const JavaVMInitArgs* args = static_cast<JavaVMInitArgs*>(vm_args);
if (JavaVMExt::IsBadJniVersion(args->version)) {
LOG(ERROR) << "Bad JNI version passed to CreateJavaVM: " << args->version;
return JNI_EVERSION;
}
RuntimeOptions options;
for (int i = 0; i < args->nOptions; ++i) {
JavaVMOption* option = &args->options[i];
options.push_back(std::make_pair(std::string(option->optionString), option->extraInfo));
}
bool ignore_unrecognized = args->ignoreUnrecognized;
if (!Runtime::Create(options, ignore_unrecognized)) {
return JNI_ERR;
}

// When `ART_CRASH_RUNTIME_DELIBERATELY` is defined (which happens only in the
// case of a test APEX), we crash the runtime here on purpose, to test the
// behavior of rollbacks following a failed ART Mainline Module update.
#ifdef ART_CRASH_RUNTIME_DELIBERATELY
LOG(FATAL) << "Runtime crashing deliberately for testing purposes.";
#endif

// Initialize native loader. This step makes sure we have
// everything set up before we start using JNI.
android::InitializeNativeLoader();

Runtime* runtime = Runtime::Current();
bool started = runtime->Start();
if (!started) {
delete Thread::Current()->GetJniEnv();
delete runtime->GetJavaVM();
LOG(WARNING) << "CreateJavaVM failed";
return JNI_ERR;
}

*p_env = Thread::Current()->GetJniEnv();
*p_vm = runtime->GetJavaVM();
return JNI_OK;
}

\art\runtime\runtime.cc

bool Runtime::Start() {
...

// Restore main thread state to kNative as expected by native code.
Thread* self = Thread::Current();

started_ = true;

class_linker_->RunEarlyRootClinits(self);
InitializeIntrinsics();

self->TransitionFromRunnableToSuspended(ThreadState::kNative);

// InitNativeMethods needs to be after started_ so that the classes
// it touches will have methods linked to the oat file if necessary.
{
ScopedTrace trace2("InitNativeMethods");
InitNativeMethods();
}
...
}

void Runtime::InitNativeMethods() {
VLOG(startup) << "Runtime::InitNativeMethods entering";
Thread* self = Thread::Current();
JNIEnv* env = self->GetJniEnv();

// Must be in the kNative state for calling native methods (JNI_OnLoad code).
CHECK_EQ(self->GetState(), ThreadState::kNative);

// Set up the native methods provided by the runtime itself.
RegisterRuntimeNativeMethods(env);

// Initialize classes used in JNI. The initialization requires runtime native
// methods to be loaded first.
WellKnownClasses::Init(env);

// Then set up libjavacore / libopenjdk / libicu_jni ,which are just
// a regular JNI libraries with a regular JNI_OnLoad. Most JNI libraries can
// just use System.loadLibrary, but libcore can't because it's the library
// that implements System.loadLibrary!
//
// By setting calling class to java.lang.Object, the caller location for these
// JNI libs is core-oj.jar in the ART APEX, and hence they are loaded from the
// com_android_art linker namespace.

// libicu_jni has to be initialized before libopenjdk{d} due to runtime dependency from
// libopenjdk{d} to Icu4cMetadata native methods in libicu_jni. See http://b/143888405
{
std::string error_msg;
if (!java_vm_->LoadNativeLibrary(
env, "libicu_jni.so", nullptr, WellKnownClasses::java_lang_Object, &error_msg)) {
LOG(FATAL) << "LoadNativeLibrary failed for \"libicu_jni.so\": " << error_msg;
}
}
{
std::string error_msg;
if (!java_vm_->LoadNativeLibrary(
env, "libjavacore.so", nullptr, WellKnownClasses::java_lang_Object, &error_msg)) {
LOG(FATAL) << "LoadNativeLibrary failed for \"libjavacore.so\": " << error_msg;
}
}
{
constexpr const char* kOpenJdkLibrary = kIsDebugBuild
? "libopenjdkd.so"
: "libopenjdk.so";
std::string error_msg;
if (!java_vm_->LoadNativeLibrary(
env, kOpenJdkLibrary, nullptr, WellKnownClasses::java_lang_Object, &error_msg)) {
LOG(FATAL) << "LoadNativeLibrary failed for \"" << kOpenJdkLibrary << "\": " << error_msg;
}
}

// Initialize well known classes that may invoke runtime native methods.
WellKnownClasses::LateInit(env);

VLOG(startup) << "Runtime::InitNativeMethods exiting";
}

So Zygote loads libart.so, which implements the JVM, calls its JNI_CreateJavaVM to create the VM, and then calls Runtime::Start to start ART. Inside Runtime::Start, libopenjdk.so is loaded via JavaVMExt::LoadNativeLibrary, which invokes that library's JNI_OnLoad; JNI_OnLoad in turn calls JNIEnv::RegisterNatives to register its JNI functions.

\libcore\ojluni\src\main\native\OnLoad.cpp

//JNI_OnLoad of libopenjdk.so
extern "C" JNIEXPORT jint JNI_OnLoad(JavaVM* vm, void*) {
...
register_java_lang_Float(env);
register_java_lang_Double(env);
register_java_lang_System(env);

// Initialize the rest in the order in which they appear in Android.bp .
register_java_util_zip_ZipFile(env);
register_java_util_zip_Inflater(env);
register_java_util_zip_Deflater(env);
register_java_io_FileDescriptor(env);
...
register_java_lang_Runtime(env);
...
};

\libcore\ojluni\src\main\native\Runtime.c

static JNINativeMethod gMethods[] = {
FAST_NATIVE_METHOD(Runtime, freeMemory, "()J"),
FAST_NATIVE_METHOD(Runtime, totalMemory, "()J"),
FAST_NATIVE_METHOD(Runtime, maxMemory, "()J"),
NATIVE_METHOD(Runtime, nativeGc, "()V"),
NATIVE_METHOD(Runtime, nativeExit, "(I)V"),
NATIVE_METHOD(Runtime, nativeLoad,
"(Ljava/lang/String;Ljava/lang/ClassLoader;Ljava/lang/Class;)"
"Ljava/lang/String;"),
};

void register_java_lang_Runtime(JNIEnv* env) {
jniRegisterNativeMethods(env, "java/lang/Runtime", gMethods, NELEM(gMethods));
}

\libnativehelper\JNIHelp.c

int jniRegisterNativeMethods(JNIEnv* env, const char* className,
const JNINativeMethod* methods, int numMethods)
{
ALOGV("Registering %s's %d native methods...", className, numMethods);
jclass clazz = (*env)->FindClass(env, className);
ALOG_ALWAYS_FATAL_IF(clazz == NULL,
"Native registration unable to find class '%s'; aborting...",
className);
int result = (*env)->RegisterNatives(env, clazz, methods, numMethods);
(*env)->DeleteLocalRef(env, clazz);
if (result == 0) {
return 0;
}
...
}

3. Register JNI functions of Framework

In AndroidRuntime::start, in addition to starting the JVM, Android framework natives are registered by calling AndroidRuntime::startReg. The framework's native C++ code lives in libandroid_runtime.so and its Java code in framework.jar and framework2.jar. Zygote links against libandroid_runtime.so, but unlike other shared libraries, its JNI registration does not happen in a JNI_OnLoad function; AndroidRuntime::startReg eventually invokes JNIEnv::RegisterNatives itself. Code tracing below:

\frameworks\base\core\jni\AndroidRuntime.cpp

int AndroidRuntime::startReg(JNIEnv* env)
{
...
if (register_jni_procs(gRegJNI, NELEM(gRegJNI), env) < 0) {
env->PopLocalFrame(NULL);
return -1;
}
...
return 0;
}

static const RegJNIRec gRegJNI[] = {
REG_JNI(register_com_android_internal_os_RuntimeInit),
REG_JNI(register_com_android_internal_os_ZygoteInit_nativeZygoteInit),
REG_JNI(register_android_os_SystemClock),
REG_JNI(register_android_util_CharsetUtils),
REG_JNI(register_android_util_EventLog),
...
};

static int register_jni_procs(const RegJNIRec array[], size_t count, JNIEnv* env)
{
for (size_t i = 0; i < count; i++) {
if (array[i].mProc(env) < 0) {
#ifndef NDEBUG
ALOGD("----------!!! %s failed to load\n", array[i].mName);
#endif
return -1;
}
}
return 0;
}

int register_com_android_internal_os_RuntimeInit(JNIEnv* env)
{
const JNINativeMethod methods[] = {
{"nativeFinishInit", "()V",
(void*)com_android_internal_os_RuntimeInit_nativeFinishInit},
{"nativeSetExitWithoutCleanup", "(Z)V",
(void*)com_android_internal_os_RuntimeInit_nativeSetExitWithoutCleanup},
};
return jniRegisterNativeMethods(env, "com/android/internal/os/RuntimeInit",
methods, NELEM(methods));
}
...

Class Loader

Java classes are loaded into ART by the ClassLoaders of the JRE (Java Runtime Environment). There are three kinds of ClassLoader: the Bootstrap Class Loader, the Extension Class Loader, and the Application Class Loader.

BootstrapClassLoader, written in C++ and built into ART, is responsible for loading the base class libraries such as rt.jar and charset.jar; it cannot be used directly by an app. You can change its search path with the -bootclasspath parameter; otherwise classes are loaded from the JAVA_HOME/lib path.

ExtensionClassLoader and ApplicationClassLoader are written in Java and are independent of ART. ExtensionClassLoader's parent class loader is BootstrapClassLoader, and ApplicationClassLoader's parent is ExtensionClassLoader. ExtensionClassLoader loads from JAVA_HOME/lib/ext, which stores the extension class libraries; ApplicationClassLoader loads from CLASSPATH, which stores the application's class libraries.

Class loading has two steps:

  1. Check whether the class has already been loaded; if so, return the loaded class immediately. This check runs bottom to top.
  2. If the class has not been loaded, each ClassLoader delegates to its parent before trying itself, so the topmost ClassLoader always gets the first attempt to load, in top-to-bottom order.

The Benefits:

  1. Prevents classes from being loaded repeatedly.
  2. Prevents core classes from being tampered with.

\libcore\ojluni\src\main\java\java\lang\ClassLoader.java

protected Class<?> loadClass(String name, boolean resolve)
throws ClassNotFoundException
{
// First, check if the class has already been loaded
Class<?> c = findLoadedClass(name);
if (c == null) {
try {
if (parent != null) {
c = parent.loadClass(name, false);
} else {
c = findBootstrapClassOrNull(name);
}
} catch (ClassNotFoundException e) {
// ClassNotFoundException thrown if class not found
// from the non-null parent class loader
}

if (c == null) {
// If still not found, then invoke findClass in order
// to find the class.
c = findClass(name);
}
}
return c;
}

Statically register JNI functions

The first call to a Java native method triggers static registration if the JNI function hasn't been registered dynamically. ART looks up the function's address in the shared library's symbol table and assigns it to the JNI entry point of the ArtMethod. After a class has been loaded by a ClassLoader, the ClassLinker links each ArtMethod's entry point. Code tracing below:

// Dynamically load a class and call a method by reflection
...
Class<?> clazz = null;
Object instance = null;
try {
clazz = Class.forName("com.example.android.simplejni.SimpleJNI");
instance = clazz.newInstance();

Method method = clazz.getMethod("add", int.class, int.class);
int a = 1;
int b = 2;
int sum = (Integer) method.invoke(instance, a, b);
}
...

\libcore\ojluni\src\main\java\java\lang\Class.java

@CallerSensitive
public static Class<?> forName(String className)
throws ClassNotFoundException {
Class<?> caller = Reflection.getCallerClass();
return forName(className, true, ClassLoader.getClassLoader(caller));
}

@CallerSensitive
public static Class<?> forName(String name, boolean initialize,
ClassLoader loader)
throws ClassNotFoundException
{
if (loader == null) {
loader = BootClassLoader.getInstance();
}
Class<?> result;
try {
result = classForName(name, initialize, loader);
} catch (ClassNotFoundException e) {
Throwable cause = e.getCause();
if (cause instanceof LinkageError) {
throw (LinkageError) cause;
}
throw e;
}
return result;
}

/** Called after security checks have been made. */
@FastNative
static native Class<?> classForName(String className, boolean shouldInitialize,
ClassLoader classLoader) throws ClassNotFoundException;

\art\runtime\native\java_lang_Class.cc

static jclass Class_classForName(JNIEnv* env, jclass, jstring javaName, jboolean initialize,
jobject javaLoader) {
ScopedFastNativeObjectAccess soa(env);
ScopedUtfChars name(env, javaName);
if (name.c_str() == nullptr) {
return nullptr;
}

// We need to validate and convert the name (from x.y.z to x/y/z). This
// is especially handy for array types, since we want to avoid
// auto-generating bogus array classes.
if (!IsValidBinaryClassName(name.c_str())) {
soa.Self()->ThrowNewExceptionF("Ljava/lang/ClassNotFoundException;",
"Invalid name: %s", name.c_str());
return nullptr;
}

std::string descriptor(DotToDescriptor(name.c_str()));
StackHandleScope<2> hs(soa.Self());
Handle<mirror::ClassLoader> class_loader(
hs.NewHandle(soa.Decode<mirror::ClassLoader>(javaLoader)));
ClassLinker* class_linker = Runtime::Current()->GetClassLinker();
Handle<mirror::Class> c(
hs.NewHandle(class_linker->FindClass(soa.Self(), descriptor.c_str(), class_loader)));
if (c == nullptr) {
ScopedLocalRef<jthrowable> cause(env, env->ExceptionOccurred());
env->ExceptionClear();
jthrowable cnfe = reinterpret_cast<jthrowable>(
env->NewObject(WellKnownClasses::java_lang_ClassNotFoundException,
WellKnownClasses::java_lang_ClassNotFoundException_init,
javaName,
cause.get()));
if (cnfe != nullptr) {
// Make sure allocation didn't fail with an OOME.
env->Throw(cnfe);
}
return nullptr;
}
if (initialize) {
class_linker->EnsureInitialized(soa.Self(), c, true, true);
}
return soa.AddLocalReference<jclass>(c.Get());
}

\art\runtime\class_linker.cc

ObjPtr<mirror::Class> ClassLinker::FindClass(Thread* self,
const char* descriptor,
Handle<mirror::ClassLoader> class_loader) {
...
// Class is not yet loaded.
if (descriptor[0] != '[' && class_loader == nullptr) {
// Non-array class and the boot class loader, search the boot class path.
ClassPathEntry pair = FindInClassPath(descriptor, hash, boot_class_path_);
if (pair.second != nullptr) {
return DefineClass(self,
descriptor,
hash,
ScopedNullHandle<mirror::ClassLoader>(),
*pair.first,
*pair.second);
} else {
...
}
}
...
}

ObjPtr<mirror::Class> ClassLinker::DefineClass(Thread* self,
const char* descriptor,
size_t hash,
Handle<mirror::ClassLoader> class_loader,
const DexFile& dex_file,
const dex::ClassDef& dex_class_def) {
...
// Get the real dex file. This will return the input if there aren't any callbacks or they do
// nothing.
DexFile const* new_dex_file = nullptr;
dex::ClassDef const* new_class_def = nullptr;
// TODO We should ideally figure out some way to move this after we get a lock on the klass so it
// will only be called once.
Runtime::Current()->GetRuntimeCallbacks()->ClassPreDefine(descriptor,
klass,
class_loader,
dex_file,
dex_class_def,
&new_dex_file,
&new_class_def);
// Check to see if an exception happened during runtime callbacks. Return if so.
if (self->IsExceptionPending()) {
return sdc.Finish(nullptr);
}
ObjPtr<mirror::DexCache> dex_cache = RegisterDexFile(*new_dex_file, class_loader.Get());
if (dex_cache == nullptr) {
self->AssertPendingException();
return sdc.Finish(nullptr);
}
klass->SetDexCache(dex_cache);
SetupClass(*new_dex_file, *new_class_def, klass, class_loader.Get());

// Mark the string class by setting its access flag.
if (UNLIKELY(!init_done_)) {
if (strcmp(descriptor, "Ljava/lang/String;") == 0) {
klass->SetStringClass();
}
}

ObjectLock<mirror::Class> lock(self, klass);
klass->SetClinitThreadId(self->GetTid());
// Make sure we have a valid empty iftable even if there are errors.
klass->SetIfTable(GetClassRoot<mirror::Object>(this)->GetIfTable());

// Add the newly loaded class to the loaded classes table.
ObjPtr<mirror::Class> existing = InsertClass(descriptor, klass.Get(), hash);
if (existing != nullptr) {
// We failed to insert because we raced with another thread. Calling EnsureResolved may cause
// this thread to block.
return sdc.Finish(EnsureResolved(self, descriptor, existing));
}

// Load the fields and other things after we are inserted in the table. This is so that we don't
// end up allocating unfree-able linear alloc resources and then lose the race condition. The
// other reason is that the field roots are only visited from the class table. So we need to be
// inserted before we allocate / fill in these fields.
LoadClass(self, *new_dex_file, *new_class_def, klass);
...

return sdc.Finish(h_new_class);
}


void ClassLinker::LoadClass(Thread* self,
const DexFile& dex_file,
const dex::ClassDef& dex_class_def,
Handle<mirror::Class> klass) {
ClassAccessor accessor(dex_file,
dex_class_def,
/* parse_hiddenapi_class_data= */ klass->IsBootStrapClassLoaded());
if (!accessor.HasClassData()) {
return;
}
Runtime* const runtime = Runtime::Current();
{
...
accessor.VisitFieldsAndMethods([&](
const ClassAccessor::Field& field) REQUIRES_SHARED(Locks::mutator_lock_) {
uint32_t field_idx = field.GetIndex();
DCHECK_GE(field_idx, last_static_field_idx); // Ordering enforced by DexFileVerifier.
if (num_sfields == 0 || LIKELY(field_idx > last_static_field_idx)) {
LoadField(field, klass, &sfields->At(num_sfields));
++num_sfields;
last_static_field_idx = field_idx;
}
}, [&](const ClassAccessor::Field& field) REQUIRES_SHARED(Locks::mutator_lock_) {
uint32_t field_idx = field.GetIndex();
DCHECK_GE(field_idx, last_instance_field_idx); // Ordering enforced by DexFileVerifier.
if (num_ifields == 0 || LIKELY(field_idx > last_instance_field_idx)) {
LoadField(field, klass, &ifields->At(num_ifields));
++num_ifields;
last_instance_field_idx = field_idx;
}
}, [&](const ClassAccessor::Method& method) REQUIRES_SHARED(Locks::mutator_lock_) {
ArtMethod* art_method = klass->GetDirectMethodUnchecked(class_def_method_index,
image_pointer_size_);
LoadMethod(dex_file, method, klass.Get(), art_method);
LinkCode(this, art_method, oat_class_ptr, class_def_method_index);
uint32_t it_method_index = method.GetIndex();
if (last_dex_method_index == it_method_index) {
// duplicate case
art_method->SetMethodIndex(last_class_def_method_index);
} else {
art_method->SetMethodIndex(class_def_method_index);
last_dex_method_index = it_method_index;
last_class_def_method_index = class_def_method_index;
}
art_method->ResetCounter(hotness_threshold);
++class_def_method_index;
}, [&](const ClassAccessor::Method& method) REQUIRES_SHARED(Locks::mutator_lock_) {
ArtMethod* art_method = klass->GetVirtualMethodUnchecked(
class_def_method_index - accessor.NumDirectMethods(),
image_pointer_size_);
art_method->ResetCounter(hotness_threshold);
LoadMethod(dex_file, method, klass.Get(), art_method);
LinkCode(this, art_method, oat_class_ptr, class_def_method_index);
++class_def_method_index;
});
...
}
...
}


static void LinkCode(ClassLinker* class_linker,
ArtMethod* method,
const OatFile::OatClass* oat_class,
uint32_t class_def_method_index) REQUIRES_SHARED(Locks::mutator_lock_) {
...

if (method->IsNative()) {
// Set up the dlsym lookup stub. Do not go through `UnregisterNative()`
// as the extra processing for @CriticalNative is not needed yet.
method->SetEntryPointFromJni(
method->IsCriticalNative() ? GetJniDlsymLookupCriticalStub() : GetJniDlsymLookupStub());
}
}

\art\runtime\entrypoints\runtime_asm_entrypoints.h

static inline const void* GetJniDlsymLookupStub() {
return reinterpret_cast<const void*>(art_jni_dlsym_lookup_stub);
}

\art\runtime\arch\arm64\jni_entrypoints_arm64.S

ENTRY art_jni_dlsym_lookup_stub
// spill regs.
SAVE_ALL_ARGS_INCREASE_FRAME 2 * 8
stp x29, x30, [sp, ALL_ARGS_SIZE]
.cfi_rel_offset x29, ALL_ARGS_SIZE
.cfi_rel_offset x30, ALL_ARGS_SIZE + 8
add x29, sp, ALL_ARGS_SIZE

mov x0, xSELF // pass Thread::Current()
// Call artFindNativeMethod() for normal native and artFindNativeMethodRunnable()
// for @FastNative or @CriticalNative.
ldr xIP0, [x0, #THREAD_TOP_QUICK_FRAME_OFFSET] // uintptr_t tagged_quick_frame
bic xIP0, xIP0, #TAGGED_JNI_SP_MASK // ArtMethod** sp
ldr xIP0, [xIP0] // ArtMethod* method
ldr xIP0, [xIP0, #ART_METHOD_ACCESS_FLAGS_OFFSET] // uint32_t access_flags
mov xIP1, #(ACCESS_FLAGS_METHOD_IS_FAST_NATIVE | ACCESS_FLAGS_METHOD_IS_CRITICAL_NATIVE)
tst xIP0, xIP1
b.ne .Llookup_stub_fast_or_critical_native
bl artFindNativeMethod
b .Llookup_stub_continue
.Llookup_stub_fast_or_critical_native:
bl artFindNativeMethodRunnable

\art\runtime\entrypoints\jni\jni_entrypoints.cc

// Used by the JNI dlsym stub to find the native method to invoke if none is registered.
extern "C" const void* artFindNativeMethodRunnable(Thread* self)
REQUIRES_SHARED(Locks::mutator_lock_) {
...

// Check whether we already have a registered native code.
// For @CriticalNative it may not be stored in the ArtMethod as a JNI entrypoint if the class
// was not visibly initialized yet. Do this check also for @FastNative and normal native for
// consistency; though success would mean that another thread raced to do this lookup.
const void* native_code = class_linker->GetRegisteredNative(self, method);
if (native_code != nullptr) {
return native_code;
}

// Lookup symbol address for method, on failure we'll return null with an exception set,
// otherwise we return the address of the method we found.
JavaVMExt* vm = down_cast<JNIEnvExt*>(self->GetJniEnv())->GetVm();
std::string error_msg;
native_code = vm->FindCodeForNativeMethod(method, &error_msg, /*can_suspend=*/ true);
if (native_code == nullptr) {
LOG(ERROR) << error_msg;
self->ThrowNewException("Ljava/lang/UnsatisfiedLinkError;", error_msg.c_str());
return nullptr;
}

// Register the code. This usually prevents future calls from coming to this function again.
// We can still come here if the ClassLinker cannot set the entrypoint in the ArtMethod,
// i.e. for @CriticalNative methods with the declaring class not visibly initialized.
return class_linker->RegisterNative(self, method, native_code);
}

\art\runtime\class_linker.cc

const void* ClassLinker::RegisterNative(
Thread* self, ArtMethod* method, const void* native_method) {
CHECK(method->IsNative()) << method->PrettyMethod();
CHECK(native_method != nullptr) << method->PrettyMethod();
void* new_native_method = nullptr;
Runtime* runtime = Runtime::Current();
runtime->GetRuntimeCallbacks()->RegisterNativeMethod(method,
native_method,
/*out*/&new_native_method);
if (method->IsCriticalNative()) {
MutexLock lock(self, critical_native_code_with_clinit_check_lock_);
// Remove old registered method if any.
auto it = critical_native_code_with_clinit_check_.find(method);
if (it != critical_native_code_with_clinit_check_.end()) {
critical_native_code_with_clinit_check_.erase(it);
}
// To ensure correct memory visibility, we need the class to be visibly
// initialized before we can set the JNI entrypoint.
if (method->GetDeclaringClass()->IsVisiblyInitialized()) {
method->SetEntryPointFromJni(new_native_method);
} else {
critical_native_code_with_clinit_check_.emplace(method, new_native_method);
}
} else {
method->SetEntryPointFromJni(new_native_method);
}
return new_native_method;
}

\art\runtime\art_method.h

void SetEntryPointFromJni(const void* entrypoint)
REQUIRES_SHARED(Locks::mutator_lock_) {
// The resolution method also has a JNI entrypoint for direct calls from
// compiled code to the JNI dlsym lookup stub for @CriticalNative.
DCHECK(IsNative() || IsRuntimeMethod());
SetEntryPointFromJniPtrSize(entrypoint, kRuntimePointerSize);
}

ALWAYS_INLINE void SetEntryPointFromJniPtrSize(const void* entrypoint, PointerSize pointer_size)
REQUIRES_SHARED(Locks::mutator_lock_) {
SetDataPtrSize(entrypoint, pointer_size);
}

ALWAYS_INLINE void SetDataPtrSize(const void* data, PointerSize pointer_size)
REQUIRES_SHARED(Locks::mutator_lock_) {
DCHECK(IsImagePointerSize(pointer_size));
SetNativePointer(DataOffset(pointer_size), data, pointer_size);
}

template<typename T>
ALWAYS_INLINE void SetNativePointer(MemberOffset offset, T new_value, PointerSize pointer_size)
REQUIRES_SHARED(Locks::mutator_lock_) {
static_assert(std::is_pointer<T>::value, "T must be a pointer type");
const auto addr = reinterpret_cast<uintptr_t>(this) + offset.Uint32Value();
if (pointer_size == PointerSize::k32) {
uintptr_t ptr = reinterpret_cast<uintptr_t>(new_value);
*reinterpret_cast<uint32_t*>(addr) = dchecked_integral_cast<uint32_t>(ptr);
} else {
*reinterpret_cast<uint64_t*>(addr) = reinterpret_cast<uintptr_t>(new_value);
}
}

\art\runtime\jni\java_vm_ext.cc

void* JavaVMExt::FindCodeForNativeMethod(ArtMethod* m, std::string* error_msg, bool can_suspend) {
CHECK(m->IsNative());
ObjPtr<mirror::Class> c = m->GetDeclaringClass();
// If this is a static method, it could be called before the class has been initialized.
CHECK(c->IsInitializing() || !NeedsClinitCheckBeforeCall(m))
<< c->GetStatus() << " " << m->PrettyMethod();
Thread* const self = Thread::Current();
void* native_method = libraries_->FindNativeMethod(self, m, error_msg, can_suspend);
if (native_method == nullptr && can_suspend) {
// Lookup JNI native methods from native TI Agent libraries. See runtime/ti/agent.h for more
// information. Agent libraries are searched for native methods after all jni libraries.
native_method = FindCodeForNativeMethodInAgents(m);
}
return native_method;
}

static void* FindCodeForNativeMethodInAgents(ArtMethod* m) REQUIRES_SHARED(Locks::mutator_lock_) {
std::string jni_short_name(m->JniShortName());
std::string jni_long_name(m->JniLongName());
for (const std::unique_ptr<ti::Agent>& agent : Runtime::Current()->GetAgents()) {
void* fn = agent->FindSymbol(jni_short_name);
if (fn != nullptr) {
VLOG(jni) << "Found implementation for " << m->PrettyMethod()
<< " (symbol: " << jni_short_name << ") in " << *agent;
return fn;
}
fn = agent->FindSymbol(jni_long_name);
if (fn != nullptr) {
VLOG(jni) << "Found implementation for " << m->PrettyMethod()
<< " (symbol: " << jni_long_name << ") in " << *agent;
return fn;
}
}
return nullptr;
}

\art\runtime\ti\agent.cc

void* Agent::FindSymbol(const std::string& name) const {
CHECK(dlopen_handle_ != nullptr) << "Cannot find symbols in an unloaded agent library " << this;
return dlsym(dlopen_handle_, name.c_str());
}

ART VM looks first for the short name; that is, the name without the argument signature. It then looks for the long name, which is the name with the argument signature. Programmers need to use the long name only when a native method is overloaded with another native method. However, this is not a problem if the native method has the same name as a nonnative method. A nonnative method (a Java method) does not reside in the native library.

e.g. short name:
Java_com_example_android_simplejni_Native_add(JNIEnv *env, jobject thiz, jint a, jint b)

e.g. long name (append '__' and the argument signature):
Java_com_example_android_simplejni_Native_add__II(JNIEnv *env, jobject thiz, jint a, jint b)

Dynamically register JNI functions

Native code accesses Java VM features by calling JNI functions. JNI functions are available through an interface pointer. An interface pointer is a pointer to a pointer. This pointer points to an array of pointers, each of which points to an interface function. Every interface function is at a predefined offset inside the array.


Important: JavaVM::GetEnv returns NULL if the current thread is a native thread that has never been attached; you must call JavaVM::AttachCurrentThread to attach the native thread to the ART VM and obtain a JNIEnv pointer. Why do we access the VM through JNIEnv instead of JavaVM directly? In my view, the JNI designers did not want every call to look up thread-dependent state internally, so they expose the per-thread JNIEnv structure; in principle the VM could resolve that state itself from the thread ID on each call.

\libnativehelper\include_jni\jni.h

typedef struct {
const char* name; // Native function name
const char* signature; // Argument signature of Native function
void* fnPtr; // Native function Pointer
} JNINativeMethod;

JNINativeMethod is a structure holding the details of a Java native function; the argument signature is used to match arguments when a native method is overloaded.

\libnativehelper\include_jni\jni.h

struct JNINativeInterface {
...
jint (*GetVersion)(JNIEnv *);
jclass (*DefineClass)(JNIEnv*, const char*, jobject, const jbyte*, jsize);
jclass (*FindClass)(JNIEnv*, const char*);
...
jint (*RegisterNatives)(JNIEnv*, jclass, const JNINativeMethod*, jint);
jint (*UnregisterNatives)(JNIEnv*, jclass);
...
jint (*GetJavaVM)(JNIEnv*, JavaVM**);
...
};

typedef const struct JNINativeInterface* C_JNIEnv;

#if defined(__cplusplus)
typedef _JNIEnv JNIEnv;
...
#else
typedef const struct JNINativeInterface* JNIEnv;
...
#endif

struct _JNIEnv {
/* do not rename this; it does not seem to be entirely opaque */
const struct JNINativeInterface* functions;
...
jint RegisterNatives(jclass clazz, const JNINativeMethod* methods,
jint nMethods)
{ return functions->RegisterNatives(this, clazz, methods, nMethods); }
...
};

\art\runtime\jni\jni_env_ext.h

class JNIEnvExt : public JNIEnv {
public:
// Creates a new JNIEnvExt. Returns null on error, in which case error_msg
// will contain a description of the error.
static JNIEnvExt* Create(Thread* self, JavaVMExt* vm, std::string* error_msg);
...
};

\art\runtime\jni\jni_env_ext.cc
JNIEnvExt* JNIEnvExt::Create(Thread* self_in, JavaVMExt* vm_in, std::string* error_msg) {
std::unique_ptr<JNIEnvExt> ret(new JNIEnvExt(self_in, vm_in, error_msg));
if (CheckLocalsValid(ret.get())) {
return ret.release();
}
return nullptr;
}

JNIEnvExt::JNIEnvExt(Thread* self_in, JavaVMExt* vm_in, std::string* error_msg)
: self_(self_in),
vm_(vm_in),
local_ref_cookie_(kIRTFirstSegment),
locals_(1, kLocal, IndirectReferenceTable::ResizableCapacity::kYes, error_msg),
monitors_("monitors", kMonitorsInitial, kMonitorsMax),
critical_(0),
check_jni_(false),
runtime_deleted_(false) {
MutexLock mu(Thread::Current(), *Locks::jni_function_table_lock_);
check_jni_ = vm_in->IsCheckJniEnabled();
functions = GetFunctionTable(check_jni_);
unchecked_functions_ = GetJniNativeInterface();
}

const JNINativeInterface* JNIEnvExt::GetFunctionTable(bool check_jni) {
const JNINativeInterface* override = JNIEnvExt::table_override_;
if (override != nullptr) {
return override;
}
return check_jni ? GetCheckJniNativeInterface() : GetJniNativeInterface();
}

\art\runtime\jni\jni_internal.cc

const JNINativeInterface* GetJniNativeInterface() {
// The template argument is passed down through the Encode/DecodeArtMethod/Field calls so if
// JniIdType is kPointer the calls will be a simple cast with no branches. This ensures that
// the normal case is still fast.
return Runtime::Current()->GetJniIdType() == JniIdType::kPointer
? &JniNativeInterfaceFunctions<false>::gJniNativeInterface
: &JniNativeInterfaceFunctions<true>::gJniNativeInterface;
}


template<bool kEnableIndexIds>
struct JniNativeInterfaceFunctions {
using JNIImpl = JNI<kEnableIndexIds>;
static constexpr JNINativeInterface gJniNativeInterface = {
...
JNIImpl::RegisterNatives,
JNIImpl::UnregisterNatives,
...
};
};


static jint RegisterNatives(JNIEnv* env,
jclass java_class,
const JNINativeMethod* methods,
jint method_count) {
...
ClassLinker* class_linker = Runtime::Current()->GetClassLinker();
ScopedObjectAccess soa(env);
StackHandleScope<1> hs(soa.Self());
Handle<mirror::Class> c = hs.NewHandle(soa.Decode<mirror::Class>(java_class));
...
CHECK_NON_NULL_ARGUMENT_FN_NAME("RegisterNatives", methods, JNI_ERR);
for (jint i = 0; i < method_count; ++i) {
const char* name = methods[i].name;
const char* sig = methods[i].signature;
const void* fnPtr = methods[i].fnPtr;
...
// Note: the right order is to try to find the method locally
// first, either as a direct or a virtual method. Then move to
// the parent.
ArtMethod* m = nullptr;
bool warn_on_going_to_parent = down_cast<JNIEnvExt*>(env)->GetVm()->IsCheckJniEnabled();
for (ObjPtr<mirror::Class> current_class = c.Get();
current_class != nullptr;
current_class = current_class->GetSuperClass()) {
// Search first only comparing methods which are native.
m = FindMethod<true>(current_class, name, sig);
if (m != nullptr) {
break;
}

// Search again comparing to all methods, to find non-native methods that match.
m = FindMethod<false>(current_class, name, sig);
if (m != nullptr) {
break;
}
...
}

if (m == nullptr) {
...
return JNI_ERR;
} else if (!m->IsNative()) {
...
return JNI_ERR;
}

VLOG(jni) << "[Registering JNI native method " << m->PrettyMethod() << "]";
...

const void* final_function_ptr = class_linker->RegisterNative(soa.Self(), m, fnPtr);
UNUSED(final_function_ptr);
}
return JNI_OK;
}

Dynamic JNI native function registration calls JNIEnv::RegisterNatives on the current thread, which invokes the JNIImpl::RegisterNatives implementation stored in the ART VM's JNI function table. JNIImpl::RegisterNatives registers the JNI native functions one by one, and falls back to the superclass chain if no matching method is found in the current class.

Java invokes a native function

ART VM’s interpreter invokes the ArtInterpreterToCompiledCodeBridge function when an ArtMethod’s entrypoint is compiled native code. The entrypoint is entry_point_from_quick_compiled_code_, which is set to art_quick_generic_jni_trampoline when the ArtMethod is a native function; art_quick_generic_jni_trampoline is a bridge function that finally invokes the JNI native function.

1. Invoke entry_point_from_quick_compiled_code_ tracing

\art\runtime\interpreter\interpreter_common.cc

void ArtInterpreterToCompiledCodeBridge(Thread* self,
ArtMethod* caller,
ShadowFrame* shadow_frame,
uint16_t arg_offset,
JValue* result)
REQUIRES_SHARED(Locks::mutator_lock_) {
ArtMethod* method = shadow_frame->GetMethod();
// Basic checks for the arg_offset. If there's no code item, the arg_offset must be 0. Otherwise,
// check that the arg_offset isn't greater than the number of registers. A stronger check is
// difficult since the frame may contain space for all the registers in the method, or only enough
// space for the arguments.
if (kIsDebugBuild) {
if (method->GetCodeItem() == nullptr) {
DCHECK_EQ(0u, arg_offset) << method->PrettyMethod();
} else {
DCHECK_LE(arg_offset, shadow_frame->NumberOfVRegs());
}
}
jit::Jit* jit = Runtime::Current()->GetJit();
if (jit != nullptr && caller != nullptr) {
jit->NotifyInterpreterToCompiledCodeTransition(self, caller);
}
method->Invoke(self, shadow_frame->GetVRegArgs(arg_offset),
(shadow_frame->NumberOfVRegs() - arg_offset) * sizeof(uint32_t),
result, method->GetInterfaceMethodIfProxy(kRuntimePointerSize)->GetShorty());
}

\art\runtime\art_method.cc

void ArtMethod::Invoke(Thread* self, uint32_t* args, uint32_t args_size, JValue* result,
const char* shorty) {
...

Runtime* runtime = Runtime::Current();
// Call the invoke stub, passing everything as arguments.
// If the runtime is not yet started or it is required by the debugger, then perform the
// Invocation by the interpreter, explicitly forcing interpretation over JIT to prevent
// cycling around the various JIT/Interpreter methods that handle method invocation.
if (UNLIKELY(!runtime->IsStarted() ||
(self->IsForceInterpreter() && !IsNative() && !IsProxyMethod() && IsInvokable()))) {
...
} else {
...

if (!IsStatic()) {
(*art_quick_invoke_stub)(this, args, args_size, self, result, shorty);
} else {
(*art_quick_invoke_static_stub)(this, args, args_size, self, result, shorty);
}
...
} else {
LOG(INFO) << "Not invoking '" << PrettyMethod() << "' code=null";
if (result != nullptr) {
result->SetJ(0);
}
}
}

// Pop transition.
self->PopManagedStackFragment(fragment);
}

\art\runtime\arch\arm64\quick_entrypoints_arm64.S

ENTRY art_quick_invoke_stub
...

// Initialize routine offsets to 0 for integers and floats.
// x8 for integers, x15 for floating point.
mov x8, #0
mov x15, #0

add x10, x5, #1 // Load shorty address, plus one to skip return value.
ldr w1, [x9],#4 // Load "this" parameter, and increment arg pointer.

// Loop to fill registers.
.LfillRegisters:
ldrb w17, [x10], #1 // Load next character in signature, and increment.
cbz w17, .LcallFunction // Exit at end of signature. Shorty 0 terminated.

cmp w17, #'F' // is this a float?
bne .LisDouble

cmp x15, # 8*12 // Skip this load if all registers full.
beq .Ladvance4

add x17, x13, x15 // Calculate subroutine to jump to.
br x17

...

.LcallFunction:

INVOKE_STUB_CALL_AND_RETURN

END art_quick_invoke_stub

.macro INVOKE_STUB_CALL_AND_RETURN

REFRESH_MARKING_REGISTER
REFRESH_SUSPEND_CHECK_REGISTER

// load method-> METHOD_QUICK_CODE_OFFSET
ldr x9, [x0, #ART_METHOD_QUICK_CODE_OFFSET_64]
// Branch to method.
blr x9

...

.endm

\art\tools\cpp-define-generator\art_method.def

ASM_DEFINE(ART_METHOD_QUICK_CODE_OFFSET_64,
art::ArtMethod::EntryPointFromQuickCompiledCodeOffset(art::PointerSize::k64).Int32Value())

\art\runtime\art_method.h

static constexpr MemberOffset EntryPointFromQuickCompiledCodeOffset(PointerSize pointer_size) {
return MemberOffset(PtrSizedFieldsOffset(pointer_size) + OFFSETOF_MEMBER(
PtrSizedFields, entry_point_from_quick_compiled_code_) / sizeof(void*)
* static_cast<size_t>(pointer_size));
}

2. Set entry_point_from_quick_compiled_code_ tracing

\art\runtime\class_linker.cc

static void LinkCode(ClassLinker* class_linker,
ArtMethod* method,
const OatFile::OatClass* oat_class,
uint32_t class_def_method_index) REQUIRES_SHARED(Locks::mutator_lock_) {
...

const void* quick_code = nullptr;
if (oat_class != nullptr) {
// Every kind of method should at least get an invoke stub from the oat_method.
// non-abstract methods also get their code pointers.
const OatFile::OatMethod oat_method = oat_class->GetOatMethod(class_def_method_index);
quick_code = oat_method.GetQuickCode();
}
runtime->GetInstrumentation()->InitializeMethodsCode(method, quick_code);

if (method->IsNative()) {
// Set up the dlsym lookup stub. Do not go through `UnregisterNative()`
// as the extra processing for @CriticalNative is not needed yet.
method->SetEntryPointFromJni(
method->IsCriticalNative() ? GetJniDlsymLookupCriticalStub() : GetJniDlsymLookupStub());
}
}

\art\runtime\instrumentation.cc
void Instrumentation::InitializeMethodsCode(ArtMethod* method, const void* aot_code)
REQUIRES_SHARED(Locks::mutator_lock_) {
...

// Use default entrypoints.
UpdateEntryPoints(
method, method->IsNative() ? GetQuickGenericJniStub() : GetQuickToInterpreterBridge());
}

static void UpdateEntryPoints(ArtMethod* method, const void* quick_code)
REQUIRES_SHARED(Locks::mutator_lock_) {
...
// If the method is from a boot image, don't dirty it if the entrypoint
// doesn't change.
if (method->GetEntryPointFromQuickCompiledCode() != quick_code) {
method->SetEntryPointFromQuickCompiledCode(quick_code);
}
}

\art\runtime\entrypoints\runtime_asm_entrypoints.h

static inline const void* GetQuickGenericJniStub() {
return reinterpret_cast<const void*>(art_quick_generic_jni_trampoline);
}

\art\runtime\art_method.h

void SetEntryPointFromQuickCompiledCode(const void* entry_point_from_quick_compiled_code)
REQUIRES_SHARED(Locks::mutator_lock_) {
SetEntryPointFromQuickCompiledCodePtrSize(entry_point_from_quick_compiled_code,
kRuntimePointerSize);
}

ALWAYS_INLINE void SetEntryPointFromQuickCompiledCodePtrSize(
const void* entry_point_from_quick_compiled_code, PointerSize pointer_size)
REQUIRES_SHARED(Locks::mutator_lock_) {
SetNativePointer(EntryPointFromQuickCompiledCodeOffset(pointer_size),
entry_point_from_quick_compiled_code,
pointer_size);
}

\art\runtime\entrypoints\runtime_asm_entrypoints.h

// Return the address of quick stub code for handling JNI calls.
extern "C" void art_quick_generic_jni_trampoline(ArtMethod*);
static inline const void* GetQuickGenericJniStub() {
return reinterpret_cast<const void*>(art_quick_generic_jni_trampoline);
}

3. Invoke native function tracing

\art\runtime\arch\arm64\quick_entrypoints_arm64.S

ENTRY art_quick_generic_jni_trampoline
...
// prepare for artQuickGenericJniTrampoline call
// (Thread*, managed_sp, reserved_area)
// x0 x1 x2 <= C calling convention
// xSELF xFP sp <= where they are

mov x0, xSELF // Thread*
mov x1, xFP // SP for the managed frame.
mov x2, sp // reserved area for arguments and other saved data (up to managed frame)
bl artQuickGenericJniTrampoline // (Thread*, sp)
...
END art_quick_generic_jni_trampoline

\art\runtime\entrypoints\quick\quick_trampoline_entrypoints.cc

extern "C" const void* artQuickGenericJniTrampoline(Thread* self,
ArtMethod** managed_sp,
uintptr_t* reserved_area)
REQUIRES_SHARED(Locks::mutator_lock_) NO_THREAD_SAFETY_ANALYSIS {
...
ArtMethod* called = *managed_sp;
...

// Retrieve the stored native code.
// Note that it may point to the lookup stub or trampoline.
// FIXME: This is broken for @CriticalNative as the art_jni_dlsym_lookup_stub
// does not handle that case. Calls from compiled stubs are also broken.
void const* nativeCode = called->GetEntryPointFromJni();

...

// Return native code.
return nativeCode;
}

\art\runtime\art_method.h

void* GetEntryPointFromJni() const {
DCHECK(IsNative());
return GetEntryPointFromJniPtrSize(kRuntimePointerSize);
}

ALWAYS_INLINE void* GetEntryPointFromJniPtrSize(PointerSize pointer_size) const {
return GetDataPtrSize(pointer_size);
}

ALWAYS_INLINE void* GetDataPtrSize(PointerSize pointer_size) const {
DCHECK(IsImagePointerSize(pointer_size));
return GetNativePointer<void*>(DataOffset(pointer_size), pointer_size);
}

ALWAYS_INLINE T GetNativePointer(MemberOffset offset, PointerSize pointer_size) const {
static_assert(std::is_pointer<T>::value, "T must be a pointer type");
const auto addr = reinterpret_cast<uintptr_t>(this) + offset.Uint32Value();
if (pointer_size == PointerSize::k32) {
return reinterpret_cast<T>(*reinterpret_cast<const uint32_t*>(addr));
} else {
auto v = *reinterpret_cast<const uint64_t*>(addr);
return reinterpret_cast<T>(dchecked_integral_cast<uintptr_t>(v));
}
}

\art\runtime\arch\arm64\quick_entrypoints_arm64.S

ENTRY art_quick_generic_jni_trampoline
...
// prepare for artQuickGenericJniTrampoline call
// (Thread*, managed_sp, reserved_area)
// x0 x1 x2 <= C calling convention
// xSELF xFP sp <= where they are

mov x0, xSELF // Thread*
mov x1, xFP // SP for the managed frame.
mov x2, sp // reserved area for arguments and other saved data (up to managed frame)
bl artQuickGenericJniTrampoline // (Thread*, sp)

// The C call will have registered the complete save-frame on success.
// The result of the call is:
// x0: pointer to native code, 0 on error.
// The bottom of the reserved area contains values for arg registers,
// hidden arg register and SP for out args for the call.

// Check for error (class init check or locking for synchronized native method can throw).
cbz x0, .Lexception_in_native

// Save the code pointer
mov xIP0, x0

// Load parameters from frame into registers.
ldp x0, x1, [sp]
ldp x2, x3, [sp, #16]
ldp x4, x5, [sp, #32]
ldp x6, x7, [sp, #48]

ldp d0, d1, [sp, #64]
ldp d2, d3, [sp, #80]
ldp d4, d5, [sp, #96]
ldp d6, d7, [sp, #112]

// Load hidden arg (x15) for @CriticalNative and SP for out args.
ldp x15, xIP1, [sp, #128]

// Apply the new SP for out args, releasing unneeded reserved area.
mov sp, xIP1

blr xIP0 // native call.
...
END art_quick_generic_jni_trampoline

entry_point_from_quick_compiled_code_ is an attribute of ArtMethod; as a bridge, it takes one of three forms:

  1. If the Java function has been compiled to quick code, entry_point_from_quick_compiled_code_ is the address of the quick code.
  2. If the Java function has no quick code, entry_point_from_quick_compiled_code_ is the address of artQuickToInterpreterBridge, and the Java bytecode is executed by the ART interpreter.
  3. If the Java native function has no quick code, entry_point_from_quick_compiled_code_ is the address of art_quick_generic_jni_trampoline, and the JNI native code is finally called.

Native code invokes a Java function

For example, translating a jstring to a std::string mainly has three steps:

  1. Finding the target class.
  2. Getting the class method id.
  3. Calling the class method.
std::string jstring_2_std_string(JNIEnv *env, jstring jstr)
{
    std::string ret;
    if (!env || !jstr) return ret;

    jclass cls_string = env->FindClass("java/lang/String");
    jstring encode = env->NewStringUTF("UTF-8");
    jmethodID mid = env->GetMethodID(cls_string, "getBytes", "(Ljava/lang/String;)[B");
    jbyteArray barr = (jbyteArray)env->CallObjectMethod(jstr, mid, encode);
    if (env->ExceptionCheck()) {  // getBytes may throw, e.g. for a bad charset
        env->ExceptionClear();
    } else {
        jsize alen = env->GetArrayLength(barr);
        jbyte* ba = env->GetByteArrayElements(barr, nullptr);
        if (alen > 0) {
            ret.append(reinterpret_cast<char*>(ba), alen);
        }
        // JNI_ABORT: release the buffer without copying back (we did not modify it).
        env->ReleaseByteArrayElements(barr, ba, JNI_ABORT);
    }

    // Release local references.
    if (barr) env->DeleteLocalRef(barr);
    if (cls_string) env->DeleteLocalRef(cls_string);
    if (encode) env->DeleteLocalRef(encode);
    return ret;
}

1. Find Java Class

\art\runtime\jni\jni_internal.cc

static jclass FindClass(JNIEnv* env, const char* name) {
CHECK_NON_NULL_ARGUMENT(name);
Runtime* runtime = Runtime::Current();
ClassLinker* class_linker = runtime->GetClassLinker();
std::string descriptor(NormalizeJniClassDescriptor(name));
ScopedObjectAccess soa(env);
ObjPtr<mirror::Class> c = nullptr;
if (runtime->IsStarted()) {
StackHandleScope<1> hs(soa.Self());
Handle<mirror::ClassLoader> class_loader(hs.NewHandle(GetClassLoader<kEnableIndexIds>(soa)));
c = class_linker->FindClass(soa.Self(), descriptor.c_str(), class_loader);
} else {
c = class_linker->FindSystemClass(soa.Self(), descriptor.c_str());
}
return soa.AddLocalReference<jclass>(c);
}

\art\runtime\class_linker.h

// Finds a class by its descriptor using the "system" class loader, ie by searching the
// boot_class_path_.
ObjPtr<mirror::Class> FindSystemClass(Thread* self, const char* descriptor)
REQUIRES_SHARED(Locks::mutator_lock_)
REQUIRES(!Locks::dex_lock_) {
return FindClass(self, descriptor, ScopedNullHandle<mirror::ClassLoader>());
}

\art\runtime\class_linker.cc

ObjPtr<mirror::Class> ClassLinker::FindClass(Thread* self,
const char* descriptor,
Handle<mirror::ClassLoader> class_loader) {
DCHECK_NE(*descriptor, '\0') << "descriptor is empty string";
DCHECK(self != nullptr);
self->AssertNoPendingException();
self->PoisonObjectPointers(); // For DefineClass, CreateArrayClass, etc...
if (descriptor[1] == '\0') {
// only the descriptors of primitive types should be 1 character long, also avoid class lookup
// for primitive classes that aren't backed by dex files.
return FindPrimitiveClass(descriptor[0]);
}
const size_t hash = ComputeModifiedUtf8Hash(descriptor);
// Find the class in the loaded classes table.
ObjPtr<mirror::Class> klass = LookupClass(self, descriptor, hash, class_loader.Get());
if (klass != nullptr) {
return EnsureResolved(self, descriptor, klass);
}
// Class is not yet loaded.
if (descriptor[0] != '[' && class_loader == nullptr) {
// Non-array class and the boot class loader, search the boot class path.
ClassPathEntry pair = FindInClassPath(descriptor, hash, boot_class_path_);
if (pair.second != nullptr) {
return DefineClass(self,
descriptor,
hash,
ScopedNullHandle<mirror::ClassLoader>(),
*pair.first,
*pair.second);
} else {
// The boot class loader is searched ahead of the application class loader, failures are
// expected and will be wrapped in a ClassNotFoundException. Use the pre-allocated error to
// trigger the chaining with a proper stack trace.
ObjPtr<mirror::Throwable> pre_allocated =
Runtime::Current()->GetPreAllocatedNoClassDefFoundError();
self->SetException(pre_allocated);
return nullptr;
}
}
...
}

FindClass first looks the descriptor up in the loaded-classes table of the current class loader and, on a hit, returns the resolved class object.
If the class is not loaded yet, the bytecode is first loaded from the dex file, then the class is created and registered via DefineClass and returned.

2. Get Java Class Method ID

\art\runtime\jni\jni_internal.cc

  static jmethodID GetMethodID(JNIEnv* env, jclass java_class, const char* name, const char* sig) {
CHECK_NON_NULL_ARGUMENT(java_class);
CHECK_NON_NULL_ARGUMENT(name);
CHECK_NON_NULL_ARGUMENT(sig);
ScopedObjectAccess soa(env);
return FindMethodID<kEnableIndexIds>(soa, java_class, name, sig, false);
}

template<bool kEnableIndexIds>
static jmethodID FindMethodID(ScopedObjectAccess& soa, jclass jni_class,
const char* name, const char* sig, bool is_static)
REQUIRES_SHARED(Locks::mutator_lock_) {
return jni::EncodeArtMethod<kEnableIndexIds>(FindMethodJNI(soa, jni_class, name, sig, is_static));
}

ArtMethod* FindMethodJNI(const ScopedObjectAccess& soa,
jclass jni_class,
const char* name,
const char* sig,
bool is_static) {
ObjPtr<mirror::Class> c = EnsureInitialized(soa.Self(), soa.Decode<mirror::Class>(jni_class));
if (c == nullptr) {
return nullptr;
}
ArtMethod* method = nullptr;
auto pointer_size = Runtime::Current()->GetClassLinker()->GetImagePointerSize();
if (c->IsInterface()) {
method = c->FindInterfaceMethod(name, sig, pointer_size);
} else {
method = c->FindClassMethod(name, sig, pointer_size);
}
...
return method;
}

\art\runtime\mirror\class.cc

ArtMethod* Class::FindClassMethod(std::string_view name,
std::string_view signature,
PointerSize pointer_size) {
return FindClassMethodWithSignature(this, name, signature, pointer_size);
}

template <typename SignatureType>
static inline ArtMethod* FindClassMethodWithSignature(ObjPtr<Class> this_klass,
std::string_view name,
const SignatureType& signature,
PointerSize pointer_size)
REQUIRES_SHARED(Locks::mutator_lock_) {
// Search declared methods first.
for (ArtMethod& method : this_klass->GetDeclaredMethodsSlice(pointer_size)) {
ArtMethod* np_method = method.GetInterfaceMethodIfProxy(pointer_size);
if (np_method->GetNameView() == name && np_method->GetSignature() == signature) {
return &method;
}
}

// Then search the superclass chain. If we find an inherited method, return it.
// If we find a method that's not inherited because of access restrictions,
// try to find a method inherited from an interface in copied methods.
ObjPtr<Class> klass = this_klass->GetSuperClass();
ArtMethod* uninherited_method = nullptr;
for (; klass != nullptr; klass = klass->GetSuperClass()) {
DCHECK(!klass->IsProxyClass());
for (ArtMethod& method : klass->GetDeclaredMethodsSlice(pointer_size)) {
if (method.GetNameView() == name && method.GetSignature() == signature) {
if (IsInheritedMethod(this_klass, klass, method)) {
return &method;
}
uninherited_method = &method;
break;
}
}
if (uninherited_method != nullptr) {
break;
}
}

// Then search copied methods.
// If we found a method that's not inherited, stop the search in its declaring class.
ObjPtr<Class> end_klass = klass;
DCHECK_EQ(uninherited_method != nullptr, end_klass != nullptr);
klass = this_klass;
if (UNLIKELY(klass->IsProxyClass())) {
DCHECK(klass->GetCopiedMethodsSlice(pointer_size).empty());
klass = klass->GetSuperClass();
}
for (; klass != end_klass; klass = klass->GetSuperClass()) {
DCHECK(!klass->IsProxyClass());
for (ArtMethod& method : klass->GetCopiedMethodsSlice(pointer_size)) {
if (method.GetNameView() == name && method.GetSignature() == signature) {
return &method; // No further check needed, copied methods are inherited by definition.
}
}
}
return uninherited_method; // Return the `uninherited_method` if any.
}

Class method searching is similar to dynamic native function registration: both the method name and the parameter signature must match. If the search misses in the declared methods, the superclass chain and the copied methods are tried.

3. Call Java Class Method

\art\runtime\jni\jni_internal.cc

static jobject CallObjectMethod(JNIEnv* env, jobject obj, jmethodID mid, ...) {
va_list ap;
va_start(ap, mid);
ScopedVAArgs free_args_later(&ap);
CHECK_NON_NULL_ARGUMENT(obj);
CHECK_NON_NULL_ARGUMENT(mid);
ScopedObjectAccess soa(env);
JValue result(InvokeVirtualOrInterfaceWithVarArgs(soa, obj, mid, ap));
return soa.AddLocalReference<jobject>(result.GetL());
}

\art\runtime\reflection.cc

template <>
JValue InvokeVirtualOrInterfaceWithVarArgs(const ScopedObjectAccessAlreadyRunnable& soa,
jobject obj,
ArtMethod* interface_method,
va_list args) {
// We want to make sure that the stack is not within a small distance from the
// protected region in case we are calling into a leaf function whose stack
// check has been elided.
if (UNLIKELY(__builtin_frame_address(0) < soa.Self()->GetStackEnd())) {
ThrowStackOverflowError(soa.Self());
return JValue();
}

ObjPtr<mirror::Object> receiver = soa.Decode<mirror::Object>(obj);
ArtMethod* method = FindVirtualMethod(receiver, interface_method);
bool is_string_init = method->IsStringConstructor();
if (is_string_init) {
// Replace calls to String.<init> with equivalent StringFactory call.
method = WellKnownClasses::StringInitToStringFactory(method);
receiver = nullptr;
}
uint32_t shorty_len = 0;
const char* shorty =
method->GetInterfaceMethodIfProxy(kRuntimePointerSize)->GetShorty(&shorty_len);
JValue result;
ArgArray arg_array(shorty, shorty_len);
arg_array.BuildArgArrayFromVarArgs(soa, receiver, args);
InvokeWithArgArray(soa, method, &arg_array, &result, shorty);
if (is_string_init) {
// For string init, remap original receiver to StringFactory result.
UpdateReference(soa.Self(), obj, result.GetL());
}
return result;
}

void InvokeWithArgArray(const ScopedObjectAccessAlreadyRunnable& soa,
ArtMethod* method, ArgArray* arg_array, JValue* result,
const char* shorty)
REQUIRES_SHARED(Locks::mutator_lock_) {
uint32_t* args = arg_array->GetArray();
if (UNLIKELY(soa.Env()->IsCheckJniEnabled())) {
CheckMethodArguments(soa.Vm(), method->GetInterfaceMethodIfProxy(kRuntimePointerSize), args);
}
method->Invoke(soa.Self(), args, arg_array->GetNumBytes(), result, shorty);
}

\art\runtime\art_method.cc

void ArtMethod::Invoke(Thread* self, uint32_t* args, uint32_t args_size, JValue* result,
const char* shorty) {
...
Runtime* runtime = Runtime::Current();
// Call the invoke stub, passing everything as arguments.
// If the runtime is not yet started or it is required by the debugger, then perform the
// Invocation by the interpreter, explicitly forcing interpretation over JIT to prevent
// cycling around the various JIT/Interpreter methods that handle method invocation.
if (UNLIKELY(!runtime->IsStarted() ||
(self->IsForceInterpreter() && !IsNative() && !IsProxyMethod() && IsInvokable()))) {
if (IsStatic()) {
art::interpreter::EnterInterpreterFromInvoke(
self, this, nullptr, args, result, /*stay_in_interpreter=*/ true);
} else {
mirror::Object* receiver =
reinterpret_cast<StackReference<mirror::Object>*>(&args[0])->AsMirrorPtr();
art::interpreter::EnterInterpreterFromInvoke(
self, this, receiver, args + 1, result, /*stay_in_interpreter=*/ true);
}
} else {
DCHECK_EQ(runtime->GetClassLinker()->GetImagePointerSize(), kRuntimePointerSize);

constexpr bool kLogInvocationStartAndReturn = false;
bool have_quick_code = GetEntryPointFromQuickCompiledCode() != nullptr;
if (LIKELY(have_quick_code)) {
...
if (!IsStatic()) {
(*art_quick_invoke_stub)(this, args, args_size, self, result, shorty);
} else {
(*art_quick_invoke_static_stub)(this, args, args_size, self, result, shorty);
}
...
} else {
LOG(INFO) << "Not invoking '" << PrettyMethod() << "' code=null";
if (result != nullptr) {
result->SetJ(0);
}
}
}

// Pop transition.
self->PopManagedStackFragment(fragment);
}

\art\runtime\arch\arm64\quick_entrypoints_arm64.S

ENTRY art_quick_invoke_stub
...

// Initialize routine offsets to 0 for integers and floats.
// x8 for integers, x15 for floating point.
mov x8, #0
mov x15, #0

add x10, x5, #1 // Load shorty address, plus one to skip return value.
ldr w1, [x9],#4 // Load "this" parameter, and increment arg pointer.

// Loop to fill registers.
.LfillRegisters:
ldrb w17, [x10], #1 // Load next character in signature, and increment.
cbz w17, .LcallFunction // Exit at end of signature. Shorty 0 terminated.

cmp w17, #'F' // is this a float?
bne .LisDouble

cmp x15, # 8*12 // Skip this load if all registers full.
beq .Ladvance4

add x17, x13, x15 // Calculate subroutine to jump to.
br x17

...

.LcallFunction:

INVOKE_STUB_CALL_AND_RETURN

END art_quick_invoke_stub

.macro INVOKE_STUB_CALL_AND_RETURN

REFRESH_MARKING_REGISTER
REFRESH_SUSPEND_CHECK_REGISTER

// load method-> METHOD_QUICK_CODE_OFFSET
ldr x9, [x0, #ART_METHOD_QUICK_CODE_OFFSET_64]
// Branch to method.
blr x9

...

.endm

\art\tools\cpp-define-generator\art_method.def

ASM_DEFINE(ART_METHOD_QUICK_CODE_OFFSET_64,
art::ArtMethod::EntryPointFromQuickCompiledCodeOffset(art::PointerSize::k64).Int32Value())

\art\runtime\art_method.h

static constexpr MemberOffset EntryPointFromQuickCompiledCodeOffset(PointerSize pointer_size) {
return MemberOffset(PtrSizedFieldsOffset(pointer_size) + OFFSETOF_MEMBER(
PtrSizedFields, entry_point_from_quick_compiled_code_) / sizeof(void*)
* static_cast<size_t>(pointer_size));
}

Finally the entry_point_from_quick_compiled_code_ function is called. entry_point_from_quick_compiled_code_ is a bridge: it is assigned to art_quick_to_interpreter_bridge at class-load time if the method has no quick code.

\art\runtime\class_linker.cc

static void LinkCode(ClassLinker* class_linker,
ArtMethod* method,
const OatFile::OatClass* oat_class,
uint32_t class_def_method_index) REQUIRES_SHARED(Locks::mutator_lock_) {
...

const void* quick_code = nullptr;
if (oat_class != nullptr) {
// Every kind of method should at least get an invoke stub from the oat_method.
// non-abstract methods also get their code pointers.
const OatFile::OatMethod oat_method = oat_class->GetOatMethod(class_def_method_index);
quick_code = oat_method.GetQuickCode();
}
runtime->GetInstrumentation()->InitializeMethodsCode(method, quick_code);

if (method->IsNative()) {
// Set up the dlsym lookup stub. Do not go through `UnregisterNative()`
// as the extra processing for @CriticalNative is not needed yet.
method->SetEntryPointFromJni(
method->IsCriticalNative() ? GetJniDlsymLookupCriticalStub() : GetJniDlsymLookupStub());
}
}

\art\runtime\instrumentation.cc

void Instrumentation::InitializeMethodsCode(ArtMethod* method, const void* aot_code)
REQUIRES_SHARED(Locks::mutator_lock_) {
if (!method->IsInvokable()) {
DCHECK(method->GetEntryPointFromQuickCompiledCode() == nullptr ||
Runtime::Current()->GetClassLinker()->IsQuickToInterpreterBridge(
method->GetEntryPointFromQuickCompiledCode()));
UpdateEntryPoints(method, GetQuickToInterpreterBridge());
return;
}

// Use instrumentation entrypoints if instrumentation is installed.
if (UNLIKELY(EntryExitStubsInstalled()) && !IsProxyInit(method)) {
if (!method->IsNative() && InterpretOnly(method)) {
UpdateEntryPoints(method, GetQuickToInterpreterBridge());
} else {
UpdateEntryPoints(method, GetQuickInstrumentationEntryPoint());
}
return;
}

if (UNLIKELY(IsForcedInterpretOnly() || IsDeoptimized(method))) {
UpdateEntryPoints(
method, method->IsNative() ? GetQuickGenericJniStub() : GetQuickToInterpreterBridge());
return;
}

// Special case if we need an initialization check.
if (NeedsClinitCheckBeforeCall(method) && !method->GetDeclaringClass()->IsVisiblyInitialized()) {
// If we have code but the method needs a class initialization check before calling
// that code, install the resolution stub that will perform the check.
// It will be replaced by the proper entry point by ClassLinker::FixupStaticTrampolines
// after initializing class (see ClassLinker::InitializeClass method).
// Note: this mimics the logic in image_writer.cc that installs the resolution
// stub only if we have compiled code or we can execute nterp, and the method needs a class
// initialization check.
if (aot_code != nullptr || method->IsNative() || CanUseNterp(method)) {
if (kIsDebugBuild && CanUseNterp(method)) {
// Adds some test coverage for the nterp clinit entrypoint.
UpdateEntryPoints(method, interpreter::GetNterpWithClinitEntryPoint());
} else {
UpdateEntryPoints(method, GetQuickResolutionStub());
}
} else {
UpdateEntryPoints(method, GetQuickToInterpreterBridge());
}
return;
}

// Use the provided AOT code if possible.
if (CanUseAotCode(aot_code)) {
UpdateEntryPoints(method, aot_code);
return;
}

// We check if the class is verified as we need the slow interpreter for lock verification.
// If the class is not verified, This will be updated in
// ClassLinker::UpdateClassAfterVerification.
if (CanUseNterp(method)) {
UpdateEntryPoints(method, interpreter::GetNterpEntryPoint());
return;
}

// Use default entrypoints.
UpdateEntryPoints(
method, method->IsNative() ? GetQuickGenericJniStub() : GetQuickToInterpreterBridge());
}

static void UpdateEntryPoints(ArtMethod* method, const void* quick_code)
REQUIRES_SHARED(Locks::mutator_lock_) {
...
// If the method is from a boot image, don't dirty it if the entrypoint
// doesn't change.
if (method->GetEntryPointFromQuickCompiledCode() != quick_code) {
method->SetEntryPointFromQuickCompiledCode(quick_code);
}
}

\art\runtime\art_method.h

void SetEntryPointFromQuickCompiledCode(const void* entry_point_from_quick_compiled_code)
REQUIRES_SHARED(Locks::mutator_lock_) {
SetEntryPointFromQuickCompiledCodePtrSize(entry_point_from_quick_compiled_code,
kRuntimePointerSize);
}

ALWAYS_INLINE void SetEntryPointFromQuickCompiledCodePtrSize(
const void* entry_point_from_quick_compiled_code, PointerSize pointer_size)
REQUIRES_SHARED(Locks::mutator_lock_) {
SetNativePointer(EntryPointFromQuickCompiledCodeOffset(pointer_size),
entry_point_from_quick_compiled_code,
pointer_size);

\art\runtime\entrypoints\runtime_asm_entrypoints.h

// Return the address of quick stub code for bridging from quick code to the interpreter.
extern "C" void art_quick_to_interpreter_bridge(ArtMethod*);
static inline const void* GetQuickToInterpreterBridge() {
return reinterpret_cast<const void*>(art_quick_to_interpreter_bridge);
}

When initializing a method's code, LinkCode inspects the method: if it has been AOT-compiled, ART executes the quick code directly; otherwise ART interprets the Java bytecode. Assuming our method has not been compiled to quick code, let's trace the interpreter path.

4. Invoke art_quick_to_interpreter_bridge

For a C++ thread, the stack, registers, and PC (Program Counter) are managed by the compiled code itself, but the JVM must manage these calling states on its own. The ART interpreter keeps its own call stack of ShadowFrame objects, each holding a method's calling context. A Java method's instructions are executed one by one according to the DEX_INSTRUCTION_LIST opcode table; instructions such as "invoke-virtual", "invoke-static", and "invoke-direct" start new method calls and thus new ART invocation cycles (less efficient than native code).

\art\runtime\arch\arm64\quick_entrypoints_arm64.S

ENTRY art_quick_to_interpreter_bridge
SETUP_SAVE_REFS_AND_ARGS_FRAME // Set up frame and save arguments.

// x0 will contain mirror::ArtMethod* method.
mov x1, xSELF // How to get Thread::Current() ???
mov x2, sp

// uint64_t artQuickToInterpreterBridge(mirror::ArtMethod* method, Thread* self,
// mirror::ArtMethod** sp)
bl artQuickToInterpreterBridge

RESTORE_SAVE_REFS_AND_ARGS_FRAME // TODO: no need to restore arguments in this case.
REFRESH_MARKING_REGISTER

fmov d0, x0

RETURN_OR_DELIVER_PENDING_EXCEPTION
END art_quick_to_interpreter_bridge

\art\runtime\entrypoints\quick\quick_trampoline_entrypoints.cc

extern "C" uint64_t artQuickToInterpreterBridge(ArtMethod* method, Thread* self, ArtMethod** sp)
REQUIRES_SHARED(Locks::mutator_lock_) {
...

JValue result;
bool force_frame_pop = false;

ArtMethod* non_proxy_method = method->GetInterfaceMethodIfProxy(kRuntimePointerSize);
..
if (UNLIKELY(deopt_frame != nullptr)) {
HandleDeoptimization(&result, method, deopt_frame, &fragment);
} else {
...
result = interpreter::EnterInterpreterFromEntryPoint(self, accessor, shadow_frame);
force_frame_pop = shadow_frame->GetForcePopFrame();
}

...

// No need to restore the args since the method has already been run by the interpreter.
return result.GetJ();
}

\art\runtime\interpreter\interpreter.cc

JValue EnterInterpreterFromEntryPoint(Thread* self, const CodeItemDataAccessor& accessor,
ShadowFrame* shadow_frame) {
DCHECK_EQ(self, Thread::Current());
bool implicit_check = Runtime::Current()->GetImplicitStackOverflowChecks();
if (UNLIKELY(__builtin_frame_address(0) < self->GetStackEndForInterpreter(implicit_check))) {
ThrowStackOverflowError(self);
return JValue();
}

jit::Jit* jit = Runtime::Current()->GetJit();
if (jit != nullptr) {
jit->NotifyCompiledCodeToInterpreterTransition(self, shadow_frame->GetMethod());
}
return Execute(self, accessor, *shadow_frame, JValue());
}

static inline JValue Execute(
Thread* self,
const CodeItemDataAccessor& accessor,
ShadowFrame& shadow_frame,
JValue result_register,
bool stay_in_interpreter = false,
bool from_deoptimize = false) REQUIRES_SHARED(Locks::mutator_lock_) {
...

if (LIKELY(!from_deoptimize)) { // Entering the method, but not via deoptimization.
...
ArtMethod *method = shadow_frame.GetMethod();

// If we can continue in JIT and have JITed code available execute JITed code.
if (!stay_in_interpreter && !self->IsForceInterpreter() && !shadow_frame.GetForcePopFrame()) {
jit::Jit* jit = Runtime::Current()->GetJit();
if (jit != nullptr) {
jit->MethodEntered(self, shadow_frame.GetMethod());
if (jit->CanInvokeCompiledCode(method)) {
JValue result;

// Pop the shadow frame before calling into compiled code.
self->PopShadowFrame();
// Calculate the offset of the first input reg. The input registers are in the high regs.
// It's ok to access the code item here since JIT code will have been touched by the
// interpreter and compiler already.
uint16_t arg_offset = accessor.RegistersSize() - accessor.InsSize();
ArtInterpreterToCompiledCodeBridge(self, nullptr, &shadow_frame, arg_offset, &result);
// Push the shadow frame back as the caller will expect it.
self->PushShadowFrame(&shadow_frame);

return result;
}
}
}

...
}

ArtMethod* method = shadow_frame.GetMethod();
...
return ExecuteSwitch(
self, accessor, shadow_frame, result_register, /*interpret_one_instruction=*/ false);
}

static JValue ExecuteSwitch(Thread* self,
const CodeItemDataAccessor& accessor,
ShadowFrame& shadow_frame,
JValue result_register,
bool interpret_one_instruction) REQUIRES_SHARED(Locks::mutator_lock_) {
if (Runtime::Current()->IsActiveTransaction()) {
if (shadow_frame.GetMethod()->SkipAccessChecks()) {
return ExecuteSwitchImpl<false, true>(
self, accessor, shadow_frame, result_register, interpret_one_instruction);
} else {
return ExecuteSwitchImpl<true, true>(
self, accessor, shadow_frame, result_register, interpret_one_instruction);
}
} else {
if (shadow_frame.GetMethod()->SkipAccessChecks()) {
return ExecuteSwitchImpl<false, false>(
self, accessor, shadow_frame, result_register, interpret_one_instruction);
} else {
return ExecuteSwitchImpl<true, false>(
self, accessor, shadow_frame, result_register, interpret_one_instruction);
}
}
}

\art\runtime\interpreter\interpreter_switch_impl.h

template<bool do_access_check, bool transaction_active>
ALWAYS_INLINE JValue ExecuteSwitchImpl(Thread* self, const CodeItemDataAccessor& accessor,
ShadowFrame& shadow_frame, JValue result_register,
bool interpret_one_instruction)
REQUIRES_SHARED(Locks::mutator_lock_) {
SwitchImplContext ctx {
.self = self,
.accessor = accessor,
.shadow_frame = shadow_frame,
.result_register = result_register,
.interpret_one_instruction = interpret_one_instruction,
.result = JValue(),
};
void* impl = reinterpret_cast<void*>(&ExecuteSwitchImplCpp<do_access_check, transaction_active>);
const uint16_t* dex_pc = ctx.accessor.Insns();
ExecuteSwitchImplAsm(&ctx, impl, dex_pc);
return ctx.result;
}

\art\runtime\interpreter\interpreter_switch_impl-inl.h

template<bool do_access_check, bool transaction_active>
NO_STACK_PROTECTOR
void ExecuteSwitchImplCpp(SwitchImplContext* ctx) {
Thread* self = ctx->self;
const CodeItemDataAccessor& accessor = ctx->accessor;
ShadowFrame& shadow_frame = ctx->shadow_frame;
self->VerifyStack();

uint32_t dex_pc = shadow_frame.GetDexPC();
const auto* const instrumentation = Runtime::Current()->GetInstrumentation();
const uint16_t* const insns = accessor.Insns();
const Instruction* next = Instruction::At(insns + dex_pc);

DCHECK(!shadow_frame.GetForceRetryInstruction())
<< "Entered interpreter from invoke without retry instruction being handled!";

bool const interpret_one_instruction = ctx->interpret_one_instruction;
while (true) {
const Instruction* const inst = next;
dex_pc = inst->GetDexPc(insns);
shadow_frame.SetDexPC(dex_pc);
TraceExecution(shadow_frame, inst, dex_pc);
uint16_t inst_data = inst->Fetch16(0);
bool exit = false;
bool success; // Moved outside to keep frames small under asan.
if (InstructionHandler<do_access_check, transaction_active, Instruction::kInvalidFormat>(
ctx, instrumentation, self, shadow_frame, dex_pc, inst, inst_data, next, exit).
Preamble()) {
DCHECK_EQ(self->IsExceptionPending(), inst->Opcode(inst_data) == Instruction::MOVE_EXCEPTION);
switch (inst->Opcode(inst_data)) {
#define OPCODE_CASE(OPCODE, OPCODE_NAME, NAME, FORMAT, i, a, e, v) \
case OPCODE: { \
next = inst->RelativeAt(Instruction::SizeInCodeUnits(Instruction::FORMAT)); \
success = OP_##OPCODE_NAME<do_access_check, transaction_active>( \
ctx, instrumentation, self, shadow_frame, dex_pc, inst, inst_data, next, exit); \
if (success && LIKELY(!interpret_one_instruction)) { \
continue; \
} \
break; \
}
DEX_INSTRUCTION_LIST(OPCODE_CASE)
#undef OPCODE_CASE
}
}
...
}
}

\art\libdexfile\dex\dex_instruction_list.h

// V(opcode, instruction_code, name, format, index, flags, extended_flags, verifier_flags);
#define DEX_INSTRUCTION_LIST(V) \
V(0x00, NOP, "nop", k10x, kIndexNone, kContinue, 0, kVerifyNothing) \
V(0x01, MOVE, "move", k12x, kIndexNone, kContinue, 0, kVerifyRegA | kVerifyRegB) \
V(0x02, MOVE_FROM16, "move/from16", k22x, kIndexNone, kContinue, 0, kVerifyRegA | kVerifyRegB) \
V(0x03, MOVE_16, "move/16", k32x, kIndexNone, kContinue, 0, kVerifyRegA | kVerifyRegB) \
...
V(0x6E, INVOKE_VIRTUAL, "invoke-virtual", k35c, kIndexMethodRef, kContinue | kThrow | kInvoke, 0, kVerifyRegBMethod | kVerifyVarArgNonZero) \
V(0x6F, INVOKE_SUPER, "invoke-super", k35c, kIndexMethodRef, kContinue | kThrow | kInvoke, 0, kVerifyRegBMethod | kVerifyVarArgNonZero) \
V(0x70, INVOKE_DIRECT, "invoke-direct", k35c, kIndexMethodRef, kContinue | kThrow | kInvoke, 0, kVerifyRegBMethod | kVerifyVarArgNonZero) \
V(0x71, INVOKE_STATIC, "invoke-static", k35c, kIndexMethodRef, kContinue | kThrow | kInvoke, 0, kVerifyRegBMethod | kVerifyVarArg) \
...

\art\runtime\interpreter\interpreter_switch_impl-inl.h

class InstructionHandler {
...
HANDLER_ATTRIBUTES bool INVOKE_VIRTUAL() {
return HandleInvoke<kVirtual, /*is_range=*/ false>();
}
...
HANDLER_ATTRIBUTES bool INVOKE_SUPER() {
return HandleInvoke<kSuper, /*is_range=*/ false>();
}
...
HANDLER_ATTRIBUTES bool INVOKE_DIRECT() {
return HandleInvoke<kDirect, /*is_range=*/ false>();
}
...
HANDLER_ATTRIBUTES bool INVOKE_STATIC() {
return HandleInvoke<kStatic, /*is_range=*/ false>();
}
...
template<InvokeType type, bool is_range>
HANDLER_ATTRIBUTES bool HandleInvoke() {
bool success = DoInvoke<type, is_range, do_access_check>(
Self(), shadow_frame_, inst_, inst_data_, ResultRegister());
return PossiblyHandlePendingExceptionOnInvoke(!success);
}
...
};

\art\runtime\interpreter\interpreter_common.h

template<InvokeType type, bool is_range, bool do_access_check>
static ALWAYS_INLINE bool DoInvoke(Thread* self,
ShadowFrame& shadow_frame,
const Instruction* inst,
uint16_t inst_data,
JValue* result)
REQUIRES_SHARED(Locks::mutator_lock_) {
// Make sure to check for async exceptions before anything else.
if (UNLIKELY(self->ObserveAsyncException())) {
return false;
}
const uint32_t vregC = is_range ? inst->VRegC_3rc() : inst->VRegC_35c();
ObjPtr<mirror::Object> obj = type == kStatic ? nullptr : shadow_frame.GetVRegReference(vregC);
ArtMethod* sf_method = shadow_frame.GetMethod();
bool string_init = false;
ArtMethod* called_method = FindMethodToCall<type>(self, sf_method, &obj, *inst, &string_init);
if (called_method == nullptr) {
DCHECK(self->IsExceptionPending());
result->SetJ(0);
return false;
}

return DoCall<is_range, do_access_check>(
called_method, self, shadow_frame, inst, inst_data, string_init, result);
}

\art\runtime\interpreter\interpreter_common.cc

template<bool is_range, bool do_assignability_check>
NO_STACK_PROTECTOR
bool DoCall(ArtMethod* called_method,
Thread* self,
ShadowFrame& shadow_frame,
const Instruction* inst,
uint16_t inst_data,
bool is_string_init,
JValue* result) {
// Argument word count.
const uint16_t number_of_inputs =
(is_range) ? inst->VRegA_3rc(inst_data) : inst->VRegA_35c(inst_data);

// TODO: find a cleaner way to separate non-range and range information without duplicating
// code.
uint32_t arg[Instruction::kMaxVarArgRegs] = {}; // only used in invoke-XXX.
uint32_t vregC = 0;
if (is_range) {
vregC = inst->VRegC_3rc();
} else {
vregC = inst->VRegC_35c();
inst->GetVarArgs(arg, inst_data);
}

return DoCallCommon<is_range, do_assignability_check>(
called_method,
self,
shadow_frame,
result,
number_of_inputs,
arg,
vregC,
is_string_init);
}


template <bool is_range,
bool do_assignability_check>
static inline bool DoCallCommon(ArtMethod* called_method,
Thread* self,
ShadowFrame& shadow_frame,
JValue* result,
uint16_t number_of_inputs,
uint32_t (&arg)[Instruction::kMaxVarArgRegs],
uint32_t vregC,
bool string_init) {
...

PerformCall(self,
accessor,
shadow_frame.GetMethod(),
first_dest_reg,
new_shadow_frame,
result,
use_interpreter_entrypoint);

if (string_init && !self->IsExceptionPending()) {
SetStringInitValueToAllAliases(&shadow_frame, string_init_vreg_this, *result);
}

return !self->IsExceptionPending();
}

\art\runtime\common_dex_operations.h

inline void PerformCall(Thread* self,
const CodeItemDataAccessor& accessor,
ArtMethod* caller_method,
const size_t first_dest_reg,
ShadowFrame* callee_frame,
JValue* result,
bool use_interpreter_entrypoint)
REQUIRES_SHARED(Locks::mutator_lock_) {
if (UNLIKELY(!Runtime::Current()->IsStarted())) {
interpreter::UnstartedRuntime::Invoke(self, accessor, callee_frame, result, first_dest_reg);
return;
}

if (!EnsureInitialized(self, callee_frame)) {
return;
}

if (use_interpreter_entrypoint) {
interpreter::ArtInterpreterToInterpreterBridge(self, accessor, callee_frame, result);
} else {
interpreter::ArtInterpreterToCompiledCodeBridge(
self, caller_method, callee_frame, first_dest_reg, result);
}
}

JNI Reference

JNI defines three kinds of references (local, global, and weak global), enumerated in jobjectRefType:

\libnativehelper\include_jni\jni.h

typedef enum jobjectRefType {
JNIInvalidRefType = 0,
JNILocalRefType = 1,
JNIGlobalRefType = 2,
JNIWeakGlobalRefType = 3
} jobjectRefType;

We manipulate references through JNIEnv, which wraps the C function table JNINativeInterface.

\libnativehelper\include_jni\jni.h

struct JNINativeInterface {
...
jobject (*NewGlobalRef)(JNIEnv*, jobject);
void (*DeleteGlobalRef)(JNIEnv*, jobject);
void (*DeleteLocalRef)(JNIEnv*, jobject);
jobject (*NewLocalRef)(JNIEnv*, jobject);
...
jweak (*NewWeakGlobalRef)(JNIEnv*, jobject);
void (*DeleteWeakGlobalRef)(JNIEnv*, jweak);
...
};

typedef const struct JNINativeInterface* C_JNIEnv;

#if defined(__cplusplus)
typedef _JNIEnv JNIEnv;
...
#else
typedef const struct JNINativeInterface* JNIEnv;
...
#endif

struct _JNIEnv {
/* do not rename this; it does not seem to be entirely opaque */
const struct JNINativeInterface* functions;
...

jobject NewGlobalRef(jobject obj)
{ return functions->NewGlobalRef(this, obj); }

void DeleteGlobalRef(jobject globalRef)
{ functions->DeleteGlobalRef(this, globalRef); }

void DeleteLocalRef(jobject localRef)
{ functions->DeleteLocalRef(this, localRef); }

jobject NewLocalRef(jobject ref)
{ return functions->NewLocalRef(this, ref); }

jweak NewWeakGlobalRef(jobject obj)
{ return functions->NewWeakGlobalRef(this, obj); }

void DeleteWeakGlobalRef(jweak obj)
{ functions->DeleteWeakGlobalRef(this, obj); }
...
};

JNIEnv is subclassed by JNIEnvExt; a JNIEnv* actually points to a JNIEnvExt object.

\art\runtime\jni\jni_env_ext.h

class JNIEnvExt : public JNIEnv {
public:
// Creates a new JNIEnvExt. Returns null on error, in which case error_msg
// will contain a description of the error.
static JNIEnvExt* Create(Thread* self, JavaVMExt* vm, std::string* error_msg);
...
};

\art\runtime\jni\jni_env_ext.cc
JNIEnvExt* JNIEnvExt::Create(Thread* self_in, JavaVMExt* vm_in, std::string* error_msg) {
std::unique_ptr<JNIEnvExt> ret(new JNIEnvExt(self_in, vm_in, error_msg));
if (CheckLocalsValid(ret.get())) {
return ret.release();
}
return nullptr;
}

JNIEnvExt::JNIEnvExt(Thread* self_in, JavaVMExt* vm_in, std::string* error_msg)
: self_(self_in),
vm_(vm_in),
local_ref_cookie_(kIRTFirstSegment),
locals_(1, kLocal, IndirectReferenceTable::ResizableCapacity::kYes, error_msg),
monitors_("monitors", kMonitorsInitial, kMonitorsMax),
critical_(0),
check_jni_(false),
runtime_deleted_(false) {
MutexLock mu(Thread::Current(), *Locks::jni_function_table_lock_);
check_jni_ = vm_in->IsCheckJniEnabled();
functions = GetFunctionTable(check_jni_);
unchecked_functions_ = GetJniNativeInterface();
}

const JNINativeInterface* JNIEnvExt::GetFunctionTable(bool check_jni) {
const JNINativeInterface* override = JNIEnvExt::table_override_;
if (override != nullptr) {
return override;
}
return check_jni ? GetCheckJniNativeInterface() : GetJniNativeInterface();
}

\art\runtime\jni\jni_internal.cc
const JNINativeInterface* GetJniNativeInterface() {
// The template argument is passed down through the Encode/DecodeArtMethod/Field calls so if
// JniIdType is kPointer the calls will be a simple cast with no branches. This ensures that
// the normal case is still fast.
return Runtime::Current()->GetJniIdType() == JniIdType::kPointer
? &JniNativeInterfaceFunctions<false>::gJniNativeInterface
: &JniNativeInterfaceFunctions<true>::gJniNativeInterface;
}

template<bool kEnableIndexIds>
struct JniNativeInterfaceFunctions {
using JNIImpl = JNI<kEnableIndexIds>;
static constexpr JNINativeInterface gJniNativeInterface = {
...
JNIImpl::NewGlobalRef,
JNIImpl::DeleteGlobalRef,
JNIImpl::DeleteLocalRef,
JNIImpl::NewLocalRef,
...
JNIImpl::NewWeakGlobalRef,
JNIImpl::DeleteWeakGlobalRef,
...
};
};

Local reference

Local JNI references are kept in the locals_ member of the JNIEnvExt object.

\art\runtime\jni\jni_env_ext.h

class JNIEnvExt : public JNIEnv {
...
// Link to Thread::Current().
Thread* const self_;

// The invocation interface JavaVM.
JavaVMExt* const vm_;

// Cookie used when using the local indirect reference table.
IRTSegmentState local_ref_cookie_;

// JNI local references.
IndirectReferenceTable locals_ GUARDED_BY(Locks::mutator_lock_);

// Stack of cookies corresponding to PushLocalFrame/PopLocalFrame calls.
// TODO: to avoid leaks (and bugs), we need to clear this vector on entry (or return)
// to a native method.
std::vector<IRTSegmentState> stacked_local_ref_cookies_;

// Entered JNI monitors, for bulk exit on thread detach.
ReferenceTable monitors_;
...
};

A JNIEnv is associated with a Thread. In an Android application there are two kinds of threads: Java threads and native threads.

  1. Java Thread
    A JNIEnvExt is created during Java thread initialization and released when the Java thread exits.

\libcore\ojluni\src\main\java\java\lang\Thread.java

public synchronized void start() {
...
started = false;
try {
nativeCreate(this, stackSize, daemon);
started = true;
} finally {
...
}
}

private native static void nativeCreate(Thread t, long stackSize, boolean daemon);

\art\runtime\native\java_lang_Thread.cc

static void Thread_nativeCreate(JNIEnv* env, jclass, jobject java_thread, jlong stack_size,
jboolean daemon) {
// There are sections in the zygote that forbid thread creation.
Runtime* runtime = Runtime::Current();
if (runtime->IsZygote() && runtime->IsZygoteNoThreadSection()) {
jclass internal_error = env->FindClass("java/lang/InternalError");
CHECK(internal_error != nullptr);
env->ThrowNew(internal_error, "Cannot create threads in zygote");
return;
}

Thread::CreateNativeThread(env, java_thread, stack_size, daemon == JNI_TRUE);
}

\art\runtime\thread.cc

bool Thread::Init(ThreadList* thread_list, JavaVMExt* java_vm, JNIEnvExt* jni_env_ext) {
...
if (jni_env_ext != nullptr) {
DCHECK_EQ(jni_env_ext->GetVm(), java_vm);
DCHECK_EQ(jni_env_ext->GetSelf(), this);
tlsPtr_.jni_env = jni_env_ext;
} else {
std::string error_msg;
tlsPtr_.jni_env = JNIEnvExt::Create(this, java_vm, &error_msg);
if (tlsPtr_.jni_env == nullptr) {
LOG(ERROR) << "Failed to create JNIEnvExt: " << error_msg;
return false;
}
}

ScopedTrace trace3("ThreadList::Register");
thread_list->Register(this);
return true;
}

void Thread::CreateNativeThread(JNIEnv* env, jobject java_peer, size_t stack_size, bool is_daemon) {
...
std::string error_msg;
std::unique_ptr<JNIEnvExt> child_jni_env_ext(
JNIEnvExt::Create(child_thread, Runtime::Current()->GetJavaVM(), &error_msg));

int pthread_create_result = 0;
if (child_jni_env_ext.get() != nullptr) {
pthread_t new_pthread;
pthread_attr_t attr;
child_thread->tlsPtr_.tmp_jni_env = child_jni_env_ext.get();
CHECK_PTHREAD_CALL(pthread_attr_init, (&attr), "new thread");
CHECK_PTHREAD_CALL(pthread_attr_setdetachstate, (&attr, PTHREAD_CREATE_DETACHED),
"PTHREAD_CREATE_DETACHED");
CHECK_PTHREAD_CALL(pthread_attr_setstacksize, (&attr, stack_size), stack_size);
pthread_create_result = pthread_create(&new_pthread,
&attr,
Thread::CreateCallback,
child_thread);
...
}

void* Thread::CreateCallback(void* arg) {
Thread* self = reinterpret_cast<Thread*>(arg);
Runtime* runtime = Runtime::Current();
if (runtime == nullptr) {
LOG(ERROR) << "Thread attaching to non-existent runtime: " << *self;
return nullptr;
}
{
...
CHECK(self->Init(runtime->GetThreadList(), runtime->GetJavaVM(), self->tlsPtr_.tmp_jni_env));
self->tlsPtr_.tmp_jni_env = nullptr;
Runtime::Current()->EndThreadBirth();
}
{
...
// Invoke the 'run' method of our java.lang.Thread.
ObjPtr<mirror::Object> receiver = self->tlsPtr_.opeer;
jmethodID mid = WellKnownClasses::java_lang_Thread_run;
ScopedLocalRef<jobject> ref(soa.Env(), soa.AddLocalReference<jobject>(receiver));
InvokeVirtualOrInterfaceWithJValues(soa, ref.get(), mid, nullptr);
}
// Detach and delete self.
Runtime::Current()->GetThreadList()->Unregister(self, /* should_run_callbacks= */ true);

return nullptr;
}

\art\runtime\thread_list.cc

void ThreadList::Unregister(Thread* self, bool should_run_callbacks) {
...
delete self;
...
}

\art\runtime\thread.cc

Thread::~Thread() {
CHECK(tlsPtr_.class_loader_override == nullptr);
CHECK(tlsPtr_.jpeer == nullptr);
CHECK(tlsPtr_.opeer == nullptr);
bool initialized = (tlsPtr_.jni_env != nullptr); // Did Thread::Init run?
if (initialized) {
delete tlsPtr_.jni_env;
tlsPtr_.jni_env = nullptr;
}
...
}
  2. Native Thread

Calling AttachCurrentThread attaches the current native thread to the JVM, creating the associated Thread and JNIEnvExt objects. Calling DetachCurrentThread detaches the thread from the JVM and releases those objects.

\art\runtime\jni\jni_env_ext.cc

  static jint AttachCurrentThread(JavaVM* vm, JNIEnv** p_env, void* thr_args) {
return AttachCurrentThreadInternal(vm, p_env, thr_args, false);
}

static jint AttachCurrentThreadInternal(JavaVM* vm, JNIEnv** p_env, void* raw_args, bool as_daemon) {
if (vm == nullptr || p_env == nullptr) {
return JNI_ERR;
}

// Return immediately if we're already attached.
Thread* self = Thread::Current();
if (self != nullptr) {
*p_env = self->GetJniEnv();
return JNI_OK;
}

Runtime* runtime = reinterpret_cast<JavaVMExt*>(vm)->GetRuntime();
...
if (!runtime->AttachCurrentThread(thread_name, as_daemon, thread_group,
!runtime->IsAotCompiler())) {
*p_env = nullptr;
return JNI_ERR;
} else {
*p_env = Thread::Current()->GetJniEnv();
return JNI_OK;
}
}
};

\art\runtime\jni\runtime.cc

bool Runtime::AttachCurrentThread(const char* thread_name, bool as_daemon, jobject thread_group,
bool create_peer, bool should_run_callbacks) {
ScopedTrace trace(__FUNCTION__);
Thread* self = Thread::Attach(thread_name,
as_daemon,
thread_group,
create_peer,
should_run_callbacks);
...
return self != nullptr;
}

\art\runtime\thread.cc

Thread* Thread::Attach(const char* thread_name,
bool as_daemon,
jobject thread_group,
bool create_peer,
bool should_run_callbacks) {
auto create_peer_action = [&](Thread* self) {
...
if (create_peer) {
...
} else {
// These aren't necessary, but they improve diagnostics for unit tests & command-line tools.
if (thread_name != nullptr) {
self->SetCachedThreadName(thread_name);
::art::SetThreadName(thread_name);
} else if (self->GetJniEnv()->IsCheckJniEnabled()) {
LOG(WARNING) << *Thread::Current() << " attached without supplying a name";
}
}
return true;
};
return Attach(thread_name, as_daemon, create_peer_action, should_run_callbacks);
}

template <typename PeerAction>
Thread* Thread::Attach(const char* thread_name,
bool as_daemon,
PeerAction peer_action,
bool should_run_callbacks) {
Runtime* runtime = Runtime::Current();
...
Thread* self;
{
ScopedTrace trace2("Thread birth");
MutexLock mu(nullptr, *Locks::runtime_shutdown_lock_);
if (runtime->IsShuttingDownLocked()) {
LOG(WARNING) << "Thread attaching while runtime is shutting down: " <<
((thread_name != nullptr) ? thread_name : "(Unnamed)");
return nullptr;
} else {
Runtime::Current()->StartThreadBirth();
self = new Thread(as_daemon);
bool init_success = self->Init(runtime->GetThreadList(), runtime->GetJavaVM());
Runtime::Current()->EndThreadBirth();
if (!init_success) {
delete self;
return nullptr;
}
}
}
...
return self;
}

bool Thread::Init(ThreadList* thread_list, JavaVMExt* java_vm, JNIEnvExt* jni_env_ext = nullptr) {
...
if (jni_env_ext != nullptr) {
DCHECK_EQ(jni_env_ext->GetVm(), java_vm);
DCHECK_EQ(jni_env_ext->GetSelf(), this);
tlsPtr_.jni_env = jni_env_ext;
} else {
std::string error_msg;
tlsPtr_.jni_env = JNIEnvExt::Create(this, java_vm, &error_msg);
if (tlsPtr_.jni_env == nullptr) {
LOG(ERROR) << "Failed to create JNIEnvExt: " << error_msg;
return false;
}
}

ScopedTrace trace3("ThreadList::Register");
thread_list->Register(this);
return true;
}

\art\runtime\jni\jni_env_ext.cc

static jint DetachCurrentThread(JavaVM* vm) {
if (vm == nullptr || Thread::Current() == nullptr) {
return JNI_ERR;
}
JavaVMExt* raw_vm = reinterpret_cast<JavaVMExt*>(vm);
Runtime* runtime = raw_vm->GetRuntime();
runtime->DetachCurrentThread();
return JNI_OK;
}

\art\runtime\thread.cc

void Runtime::DetachCurrentThread(bool should_run_callbacks) {
ScopedTrace trace(__FUNCTION__);
Thread* self = Thread::Current();
if (self == nullptr) {
LOG(FATAL) << "attempting to detach thread that is not attached";
}
if (self->HasManagedStack()) {
LOG(FATAL) << *Thread::Current() << " attempting to detach while still running code";
}
thread_list_->Unregister(self, should_run_callbacks);
}

\art\runtime\thread_list.cc
void ThreadList::Unregister(Thread* self, bool should_run_callbacks) {
...
delete self;
...
}

Thread::~Thread() {
...
if (initialized) {
delete tlsPtr_.jni_env;
tlsPtr_.jni_env = nullptr;
}
...
}
  3. Add Local Reference

The local reference table is resizable (the global reference table is not). Its default size is 64 entries, and the table doubles in size when it fills up. In older versions of Android the size was fixed at 512; on overflow the application aborts with the log "JNI ERROR (app bug): local reference table overflow (max=512)".

\art\runtime\jni\jni_internal.cc

static jobject NewLocalRef(JNIEnv* env, jobject obj) {
  ScopedObjectAccess soa(env);
  ObjPtr<mirror::Object> decoded_obj = soa.Decode<mirror::Object>(obj);
  // Check for null after decoding the object to handle cleared weak globals.
  if (decoded_obj == nullptr) {
    return nullptr;
  }
  return soa.AddLocalReference<jobject>(decoded_obj);
}

\art\runtime\jni\jni_env_ext-inl.h

template<typename T>
inline T JNIEnvExt::AddLocalReference(ObjPtr<mirror::Object> obj) {
  std::string error_msg;
  IndirectRef ref = locals_.Add(local_ref_cookie_, obj, &error_msg);
  if (UNLIKELY(ref == nullptr)) {
    // This is really unexpected if we allow resizing local IRTs...
    LOG(FATAL) << error_msg;
    UNREACHABLE();
  }
  ...
  return reinterpret_cast<T>(ref);
}

\art\runtime\indirect_reference_table.cc

IndirectRef IndirectReferenceTable::Add(IRTSegmentState previous_state,
                                        ObjPtr<mirror::Object> obj,
                                        std::string* error_msg) {
  if (kDebugIRT) {
    LOG(INFO) << "+++ Add: previous_state=" << previous_state.top_index
              << " top_index=" << segment_state_.top_index
              << " last_known_prev_top_index=" << last_known_previous_state_.top_index
              << " holes=" << current_num_holes_;
  }

  size_t top_index = segment_state_.top_index;

  CHECK(obj != nullptr);
  VerifyObject(obj);
  DCHECK(table_ != nullptr);

  if (top_index == max_entries_) {
    if (resizable_ == ResizableCapacity::kNo) {
      std::ostringstream oss;
      oss << "JNI ERROR (app bug): " << kind_ << " table overflow "
          << "(max=" << max_entries_ << ")"
          << MutatorLockedDumpable<IndirectReferenceTable>(*this);
      *error_msg = oss.str();
      return nullptr;
    }

    // Try to double space.
    if (std::numeric_limits<size_t>::max() / 2 < max_entries_) {
      std::ostringstream oss;
      oss << "JNI ERROR (app bug): " << kind_ << " table overflow "
          << "(max=" << max_entries_ << ")" << std::endl
          << MutatorLockedDumpable<IndirectReferenceTable>(*this)
          << " Resizing failed: exceeds size_t";
      *error_msg = oss.str();
      return nullptr;
    }

    std::string inner_error_msg;
    if (!Resize(max_entries_ * 2, &inner_error_msg)) {
      std::ostringstream oss;
      oss << "JNI ERROR (app bug): " << kind_ << " table overflow "
          << "(max=" << max_entries_ << ")" << std::endl
          << MutatorLockedDumpable<IndirectReferenceTable>(*this)
          << " Resizing failed: " << inner_error_msg;
      *error_msg = oss.str();
      return nullptr;
    }
  }

  RecoverHoles(previous_state);
  CheckHoleCount(table_, current_num_holes_, previous_state, segment_state_);

  // We know there's enough room in the table. Now we just need to find
  // the right spot. If there's a hole, find it and fill it; otherwise,
  // add to the end of the list.
  IndirectRef result;
  size_t index;
  if (current_num_holes_ > 0) {
    DCHECK_GT(top_index, 1U);
    // Find the first hole; likely to be near the end of the list.
    IrtEntry* p_scan = &table_[top_index - 1];
    DCHECK(!p_scan->GetReference()->IsNull());
    --p_scan;
    while (!p_scan->GetReference()->IsNull()) {
      DCHECK_GE(p_scan, table_ + previous_state.top_index);
      --p_scan;
    }
    index = p_scan - table_;
    current_num_holes_--;
  } else {
    // Add to the end.
    index = top_index++;
    segment_state_.top_index = top_index;
  }
  table_[index].Add(obj);
  result = ToIndirectRef(index);
  if (kDebugIRT) {
    LOG(INFO) << "+++ added at " << ExtractIndex(result) << " top=" << segment_state_.top_index
              << " holes=" << current_num_holes_;
  }

  DCHECK(result != nullptr);
  return result;
}
  1. Local Reference Table Size

sizeof(IrtEntry) is 8: uint32_t and GcRoot<mirror::Object> are each 4 bytes (GcRoot<mirror::Object> ultimately wraps a 4-byte compressed object reference). So kSmallIrtEntries = kInitialIrtBytes / sizeof(IrtEntry) = 512 / 8 = 64.
Looking at JNIEnvExt's constructor below, max_count is 1, so the default capacity is kSmallIrtEntries.

Android runs on many phone brands and modified OS builds. A vendor with plenty of memory could choose a larger default, say 128 entries; following the logic of IndirectReferenceTable's constructor, the allocation is rounded up to a page, RoundUp(128 * 8, 4096) = 4096 bytes, i.e. 512 entries. So the sizes you encounter, 64 or 512 or more, fit the design (64 x 2 x 2 x 2) * sizeof(IrtEntry) = kPageSize = 4096, which avoids cross-page memory access.

\art\runtime\indirect_reference_table.h

constexpr size_t kInitialIrtBytes = 512;  // Number of bytes in an initial local table.
constexpr size_t kSmallIrtEntries = kInitialIrtBytes / sizeof(IrtEntry);
...
class IrtEntry {
 public:
  void Add(ObjPtr<mirror::Object> obj) REQUIRES_SHARED(Locks::mutator_lock_);

  GcRoot<mirror::Object>* GetReference() {
    DCHECK_LE(serial_, kIRTMaxSerial);
    return &reference_;
  }

  const GcRoot<mirror::Object>* GetReference() const {
    DCHECK_LE(serial_, kIRTMaxSerial);
    return &reference_;
  }

  uint32_t GetSerial() const {
    return serial_;
  }

  void SetReference(ObjPtr<mirror::Object> obj) REQUIRES_SHARED(Locks::mutator_lock_);

 private:
  uint32_t serial_;  // Incremented for each reuse; checked against reference.
  GcRoot<mirror::Object> reference_;
};

class IndirectReferenceTable {
  ...
  // Max_count is the minimum initial capacity (resizable), or minimum total capacity
  // (not resizable). A value of 1 indicates an implementation-convenient small size.
  IndirectReferenceTable(size_t max_count,
                         IndirectRefKind kind,
                         ResizableCapacity resizable,
                         std::string* error_msg);

  ~IndirectReferenceTable();
  ...
  /// semi-public - read/write by jni down calls.
  IRTSegmentState segment_state_;

  // Mem map where we store the indirect refs. If it's invalid, and table_ is non-null, then
  // table_ is valid, but was allocated via allocSmallIRT();
  MemMap table_mem_map_;
  // bottom of the stack. Do not directly access the object references
  // in this as they are roots. Use Get() that has a read barrier.
  IrtEntry* table_;
  // bit mask, ORed into all irefs.
  const IndirectRefKind kind_;

  // max #of entries allowed (modulo resizing).
  size_t max_entries_;
  ...
};

\art\libartbase\base\globals.h

// System page size. We check this against sysconf(_SC_PAGE_SIZE) at runtime, but use a simple
// compile-time constant so the compiler can generate better code.
static constexpr size_t kPageSize = 4096;

\art\runtime\indirect_reference_table.cc

// Maximum table size we allow.
static constexpr size_t kMaxTableSizeInBytes = 128 * MB;
...
IndirectReferenceTable::IndirectReferenceTable(size_t max_count,
                                               IndirectRefKind desired_kind,
                                               ResizableCapacity resizable,
                                               std::string* error_msg)
    : segment_state_(kIRTFirstSegment),
      table_(nullptr),
      kind_(desired_kind),
      max_entries_(max_count),
      current_num_holes_(0),
      resizable_(resizable) {
  CHECK(error_msg != nullptr);
  CHECK_NE(desired_kind, kJniTransitionOrInvalid);

  // Overflow and maximum check.
  CHECK_LE(max_count, kMaxTableSizeInBytes / sizeof(IrtEntry));

  if (max_entries_ <= kSmallIrtEntries) {
    table_ = Runtime::Current()->GetSmallIrtAllocator()->Allocate(error_msg);
    if (table_ != nullptr) {
      max_entries_ = kSmallIrtEntries;
      // table_mem_map_ remains invalid.
    }
  }
  if (table_ == nullptr) {
    const size_t table_bytes = RoundUp(max_count * sizeof(IrtEntry), kPageSize);
    table_mem_map_ = NewIRTMap(table_bytes, error_msg);
    if (!table_mem_map_.IsValid() && error_msg->empty()) {
      *error_msg = "Unable to map memory for indirect ref table";
    }

    if (table_mem_map_.IsValid()) {
      table_ = reinterpret_cast<IrtEntry*>(table_mem_map_.Begin());
    } else {
      table_ = nullptr;
    }
    // Take into account the actual length.
    max_entries_ = table_bytes / sizeof(IrtEntry);
  }
  segment_state_ = kIRTFirstSegment;
  last_known_previous_state_ = kIRTFirstSegment;
}

\art\runtime\jni\jni_env_ext.cc

JNIEnvExt::JNIEnvExt(Thread* self_in, JavaVMExt* vm_in, std::string* error_msg)
    : self_(self_in),
      vm_(vm_in),
      local_ref_cookie_(kIRTFirstSegment),
      locals_(1, kLocal, IndirectReferenceTable::ResizableCapacity::kYes, error_msg),
      monitors_("monitors", kMonitorsInitial, kMonitorsMax),
      critical_(0),
      check_jni_(false),
      runtime_deleted_(false) {
  MutexLock mu(Thread::Current(), *Locks::jni_function_table_lock_);
  check_jni_ = vm_in->IsCheckJniEnabled();
  functions = GetFunctionTable(check_jni_);
  unchecked_functions_ = GetJniNativeInterface();
}
  1. Delete Local Reference

Local references are released when a native thread detaches from the JVM or a Java thread exits. In a long-lived (resident) thread that frequently creates local references without deleting them proactively, the references accumulate and are never released.

\art\runtime\jni\jni_internal.cc

static void DeleteLocalRef(JNIEnv* env, jobject obj) {
  if (obj == nullptr) {
    return;
  }
  // SOA is only necessary to have exclusion between GC root marking and removing.
  // We don't want to have the GC attempt to mark a null root if we just removed
  // it. b/22119403
  ScopedObjectAccess soa(env);
  auto* ext_env = down_cast<JNIEnvExt*>(env);
  if (!ext_env->locals_.Remove(ext_env->local_ref_cookie_, obj)) {
    // Attempting to delete a local reference that is not in the
    // topmost local reference frame is a no-op. DeleteLocalRef returns
    // void and doesn't throw any exceptions, but we should probably
    // complain about it so the user will notice that things aren't
    // going quite the way they expect.
    LOG(WARNING) << "JNI WARNING: DeleteLocalRef(" << obj << ") "
                 << "failed to find entry";
    // Investigating b/228295454: Scudo ERROR: internal map failure (NO MEMORY).
    soa.Self()->DumpJavaStack(LOG_STREAM(WARNING));
  }
}

Global reference

Global references are kept in the JVM. One process supports only one JVM, so the global reference table is a globally unique resource.

\art\runtime\jni\java_vm_ext.h

class JavaVMExt : public JavaVM {
  ...
  // Not guarded by globals_lock since we sometimes use SynchronizedGet in Thread::DecodeJObject.
  IndirectReferenceTable globals_;

  // No lock annotation since UnloadNativeLibraries is called on libraries_ but locks the
  // jni_libraries_lock_ internally.
  std::unique_ptr<Libraries> libraries_;

  // Used by -Xcheck:jni.
  const JNIInvokeInterface* const unchecked_functions_;

  // Since weak_globals_ contain weak roots, be careful not to
  // directly access the object references in it. Use Get() with the
  // read barrier enabled.
  // Not guarded by weak_globals_lock since we may use SynchronizedGet in DecodeWeakGlobal.
  IndirectReferenceTable weak_globals_;
  ...
};

The default size is 51200 entries, and the table is not resizable. Weak global references, however, do not keep an object alive: the GC clears a weak global once its referent is otherwise unreachable.

\art\runtime\jni\java_vm_ext.cc

// Maximum number of global references (must fit in 16 bits).
static constexpr size_t kGlobalsMax = 51200;

// Maximum number of weak global references (must fit in 16 bits).
static constexpr size_t kWeakGlobalsMax = 51200;

JavaVMExt::JavaVMExt(Runtime* runtime,
                     const RuntimeArgumentMap& runtime_options,
                     std::string* error_msg)
    : runtime_(runtime),
      check_jni_abort_hook_(nullptr),
      check_jni_abort_hook_data_(nullptr),
      check_jni_(false),  // Initialized properly in the constructor body below.
      force_copy_(runtime_options.Exists(RuntimeArgumentMap::JniOptsForceCopy)),
      tracing_enabled_(runtime_options.Exists(RuntimeArgumentMap::JniTrace)
                       || VLOG_IS_ON(third_party_jni)),
      trace_(runtime_options.GetOrDefault(RuntimeArgumentMap::JniTrace)),
      globals_(kGlobalsMax, kGlobal, IndirectReferenceTable::ResizableCapacity::kNo, error_msg),
      libraries_(new Libraries),
      unchecked_functions_(&gJniInvokeInterface),
      weak_globals_(kWeakGlobalsMax,
                    kWeakGlobal,
                    IndirectReferenceTable::ResizableCapacity::kNo,
                    error_msg),
      allow_accessing_weak_globals_(true),
      weak_globals_add_condition_("weak globals add condition",
                                  (CHECK(Locks::jni_weak_globals_lock_ != nullptr),
                                   *Locks::jni_weak_globals_lock_)),
      env_hooks_lock_("environment hooks lock", art::kGenericBottomLock),
      env_hooks_(),
      enable_allocation_tracking_delta_(
          runtime_options.GetOrDefault(RuntimeArgumentMap::GlobalRefAllocStackTraceLimit)),
      allocation_tracking_enabled_(false),
      old_allocation_tracking_state_(false) {
  functions = unchecked_functions_;
  SetCheckJniEnabled(runtime_options.Exists(RuntimeArgumentMap::CheckJni) || kIsDebugBuild);
}
  1. Add Global Reference

\art\runtime\jni\jni_internal.cc

static jobject NewGlobalRef(JNIEnv* env, jobject obj) {
  ScopedObjectAccess soa(env);
  ObjPtr<mirror::Object> decoded_obj = soa.Decode<mirror::Object>(obj);
  return soa.Vm()->AddGlobalRef(soa.Self(), decoded_obj);
}

static jweak NewWeakGlobalRef(JNIEnv* env, jobject obj) {
  ScopedObjectAccess soa(env);
  ObjPtr<mirror::Object> decoded_obj = soa.Decode<mirror::Object>(obj);
  return soa.Vm()->AddWeakGlobalRef(soa.Self(), decoded_obj);
}

\art\runtime\jni\java_vm_ext.cc

jobject JavaVMExt::AddGlobalRef(Thread* self, ObjPtr<mirror::Object> obj) {
  // Check for null after decoding the object to handle cleared weak globals.
  if (obj == nullptr) {
    return nullptr;
  }
  IndirectRef ref;
  std::string error_msg;
  {
    WriterMutexLock mu(self, *Locks::jni_globals_lock_);
    ref = globals_.Add(kIRTFirstSegment, obj, &error_msg);
    MaybeTraceGlobals();
  }
  if (UNLIKELY(ref == nullptr)) {
    LOG(FATAL) << error_msg;
    UNREACHABLE();
  }
  CheckGlobalRefAllocationTracking();
  return reinterpret_cast<jobject>(ref);
}

jweak JavaVMExt::AddWeakGlobalRef(Thread* self, ObjPtr<mirror::Object> obj) {
  if (obj == nullptr) {
    return nullptr;
  }
  MutexLock mu(self, *Locks::jni_weak_globals_lock_);
  // CMS needs this to block for concurrent reference processing because an object allocated during
  // the GC won't be marked and concurrent reference processing would incorrectly clear the JNI weak
  // ref. But CC (gUseReadBarrier == true) doesn't because of the to-space invariant.
  if (!gUseReadBarrier) {
    WaitForWeakGlobalsAccess(self);
  }
  std::string error_msg;
  IndirectRef ref = weak_globals_.Add(kIRTFirstSegment, obj, &error_msg);
  MaybeTraceWeakGlobals();
  if (UNLIKELY(ref == nullptr)) {
    LOG(FATAL) << error_msg;
    UNREACHABLE();
  }
  return reinterpret_cast<jweak>(ref);
}
  1. Delete Global Reference

\art\runtime\jni\jni_internal.cc

static void DeleteGlobalRef(JNIEnv* env, jobject obj) {
  JavaVMExt* vm = down_cast<JNIEnvExt*>(env)->GetVm();
  Thread* self = down_cast<JNIEnvExt*>(env)->self_;
  vm->DeleteGlobalRef(self, obj);
}

static void DeleteWeakGlobalRef(JNIEnv* env, jweak obj) {
  JavaVMExt* vm = down_cast<JNIEnvExt*>(env)->GetVm();
  Thread* self = down_cast<JNIEnvExt*>(env)->self_;
  vm->DeleteWeakGlobalRef(self, obj);
}

\art\runtime\jni\java_vm_ext.cc

void JavaVMExt::DeleteGlobalRef(Thread* self, jobject obj) {
  if (obj == nullptr) {
    return;
  }
  {
    WriterMutexLock mu(self, *Locks::jni_globals_lock_);
    if (!globals_.Remove(kIRTFirstSegment, obj)) {
      LOG(WARNING) << "JNI WARNING: DeleteGlobalRef(" << obj << ") "
                   << "failed to find entry";
    }
    MaybeTraceGlobals();
  }
  CheckGlobalRefAllocationTracking();
}

void JavaVMExt::DeleteWeakGlobalRef(Thread* self, jweak obj) {
  if (obj == nullptr) {
    return;
  }
  MutexLock mu(self, *Locks::jni_weak_globals_lock_);
  if (!weak_globals_.Remove(kIRTFirstSegment, obj)) {
    LOG(WARNING) << "JNI WARNING: DeleteWeakGlobalRef(" << obj << ") "
                 << "failed to find entry";
  }
  MaybeTraceWeakGlobals();
}

Some suggestions for using JNI

1. Frequently calling AttachCurrentThread and DetachCurrentThread is not recommended; it is inefficient and usually unnecessary.

Threads attached through JNI must call DetachCurrentThread() before they exit. If coding this directly is awkward, in Android 2.0 (Eclair) and higher you can use pthread_key_create() to define a destructor function that will be called before the thread exits, and call DetachCurrentThread() from there. (Use that key with pthread_setspecific() to store the JNIEnv in thread-local-storage; that way it’ll be passed into your destructor as the argument.)

Note that detaching only when the thread exits delays the release of local references; it is better to release them promptly with DeleteLocalRef.

2. Avoid crossing the Java/C++ boundary more than necessary; the transitions are expensive. If you must cross it, there are ways to optimize: 1) cache method IDs, field IDs, and classes; 2) avoid Java->C++->Java round trips.

More recommendations refer to:
https://developer.android.com/training/articles/perf-jni

Android JNI
http://xiazelong.com/2022/12/16/jni/
Author: zelong xia
Posted on: December 16, 2022